As the world’s first graph database system built from the outset for massively parallel computation of queries and analytics, TigerGraph is genuinely capable of delivering real-time analytics on web-scale data. In its first benchmarking tests, the software loaded a data batch in one hour, compared with the 24 hours required by the current leading solution.
TigerGraph’s patented Native Parallel Graph™ (NPG) design brings storage and computation together, supporting real-time graph updates as well as built-in parallel computation.
GSQL – A graph query language for today’s Big Data world
With TigerGraph’s high-level GSQL language, developers can create queries and sophisticated graph analytics. As an SQL-like graph query language, it allows easy exploration and interactive analysis of large datasets.
GSQL is a high-level, Turing-complete query language for expressing graph analytics, featuring an SQL-like syntax that makes it easier for SQL programmers to learn, while also supporting the ‘Map-Reduce’ approach favoured by NoSQL developers. What’s more, because it is built with scalable, massively parallel evaluation in mind, it lets users express any algorithm.
This optimises storage efficiency and query speed, and supports data-independent app/query development.
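The ‘Map-Reduce’ style mentioned above can be illustrated with a short Python sketch: map each edge to its endpoints, then reduce by counting per vertex. This is only an analogy for the accumulation pattern GSQL expresses declaratively; the function name and data model here are invented for illustration, not GSQL itself.

```python
from collections import Counter

def degree_by_mapreduce(edges):
    """Compute vertex degrees in map-reduce style (illustrative analogy,
    not TigerGraph's API): map each edge to its two endpoints, then
    reduce by counting occurrences per vertex."""
    mapped = (v for edge in edges for v in edge)  # map: emit both endpoints
    return Counter(mapped)                        # reduce: count per vertex
```

For example, `degree_by_mapreduce([(0, 1), (1, 2)])` reports that vertex 1 has degree 2 and vertices 0 and 2 have degree 1.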
Built-in High-Performance Parallelism
For achieving the fastest results.
SQL-like syntax
Provides familiarity to the more than one million users of SQL.
Conventional control flow (FOR, WHILE, IF/ELSE)
Supports easy implementation of conventional algorithms.
Procedural queries calling queries
Enables flexible parameterised queries, which can then be used to build more complex queries.
Transactional graph updates
Allows Hybrid Transactional/Analytical Processing (HTAP) with data updates in real time.
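The ‘procedural queries calling queries’ feature above can be sketched conceptually in Python: a small parameterised query is defined once and then reused by a more complex one. The function names and the adjacency-set data model here are our own illustrative assumptions, not GSQL syntax.

```python
def neighbours(graph, vertex):
    """A minimal parameterised 'query': the vertices one hop from
    `vertex`. `graph` is a dict mapping each vertex to a set of
    neighbours (an illustrative stand-in for a stored graph)."""
    return graph.get(vertex, set())

def two_hop(graph, vertex):
    """A more complex 'query' built by calling the simpler one:
    all vertices exactly two hops away, excluding the start vertex."""
    result = set()
    for n in neighbours(graph, vertex):
        result |= neighbours(graph, n)
    return result - {vertex}
```

For example, with `graph = {0: {1}, 1: {0, 2}, 2: {1}}`, the call `two_hop(graph, 0)` returns `{2}`: the composed query reuses `neighbours` rather than re-implementing the traversal.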
Very Large Graphs (VLGs)
TigerGraph offers graph parallel algorithms for Very Large Graphs (VLGs), enabling parallelism for large scale graph analytics. This provides the user with a significant technological advantage that further increases as graphs inevitably grow larger.
TigerGraph works efficiently for fast, targeted queries that may touch anywhere from a small section of a graph to millions of vertices and edges, as well as very complex analytical queries that must touch every last vertex in a graph. Unlike other solutions, TigerGraph’s real-time incremental updates give it powerful, up-to-the-second analytic capability.
MPP Computational Model
In TigerGraph’s system, the edges and vertices of a graph work together as parallel units of storage and computation. This means the graph is not just a static collection of stored data, but a massively parallel computational engine in which all vertices communicate with one another via edges.
Because every vertex and edge can store arbitrary information, TigerGraph’s system can execute parallel compute functions on each of them, taking advantage of the key features of multi-core CPU machines and in-memory computing.
An important element of TigerGraph is that it supports a number of graph partitioning algorithms.
More often than not, automatic partitioning performed on input data gives excellent results without the need for tuning and optimisation. However, TigerGraph provides added flexibility that allows for even better application performance thanks to its application-specific and mixed partitioning strategies.
TigerGraph’s system also has the capability to run several graph engines as an ‘active-active’ network. This allows each graph engine to host identical graphs under different partitioning algorithms, customised for different application queries. A front-end server (usually a REST server) then directs application queries to specific graph engines, depending on the query type.
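The routing step described above can be sketched as a simple dispatch table: each query type maps to the engine whose partitioning suits it. The engine names and query types below are illustrative assumptions, not TigerGraph configuration.

```python
# Sketch of an 'active-active' front end: several engines host the same
# graph under different partitioning schemes, and the router sends each
# query type to the engine partitioned for it (names are hypothetical).
ROUTES = {
    "short_path": "engine_hash_partitioned",   # point lookups / short traversals
    "community":  "engine_range_partitioned",  # whole-graph analytics
}

def route(query_type, default="engine_hash_partitioned"):
    """Pick the engine best partitioned for this query type,
    falling back to a default engine for unknown types."""
    return ROUTES.get(query_type, default)
```

A REST front end would call something like `route("community")` to choose the engine, here returning the range-partitioned one.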
A truly transformational system
TigerGraph gives users an advantage by treating the graph itself as a computational model. It does this by associating a compute function with each individual vertex and edge in a graph, turning them into active, parallel compute-and-storage elements.
In this way, vertices in the graph can communicate by exchanging messages via edges — very much like how neurons operate in the human brain — which ultimately delivers fast and massively parallel computation.