Amazon Neptune graph database service launched in preview

Amazon just launched a limited preview of Amazon Neptune, a new graph database service aimed at helping users gain insights from their data by analyzing the relationships within it. In essence, the service stores billions of relationships and lets you query the resulting graph to surface insights about your data.

Here’s some more information about the tool and how you can get started using it.

Features of Amazon Neptune


A graph database stores vertices and the edges that connect them; both vertices and edges can carry properties stored as key-value pairs. This model is especially useful for evaluating data that is highly connected, contextual, or relationship-driven.
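To make the model concrete, here is a minimal in-memory sketch of a property graph in Python, where vertices and edges both carry key-value properties. The `PropertyGraph` class and the sample data are illustrative assumptions, not Neptune's API; Neptune handles this storage and querying for you at scale.

```python
class PropertyGraph:
    """Toy property graph: vertices and edges both hold key-value properties."""

    def __init__(self):
        self.vertices = {}   # vertex id -> {property: value}
        self.edges = []      # (src id, edge label, dst id, {property: value})

    def add_vertex(self, vid, **props):
        self.vertices[vid] = props

    def add_edge(self, src, label, dst, **props):
        self.edges.append((src, label, dst, props))

    def neighbors(self, vid, label=None):
        """Follow outgoing edges from a vertex, optionally filtered by label."""
        return [dst for src, lbl, dst, _ in self.edges
                if src == vid and (label is None or lbl == label)]

g = PropertyGraph()
g.add_vertex("alice", name="Alice", age=34)
g.add_vertex("bob", name="Bob", age=29)
g.add_edge("alice", "knows", "bob", since=2015)  # the edge itself has a property

print(g.neighbors("alice", "knows"))  # ['bob']
```

Even in this toy form, answering "who does Alice know?" is a single edge traversal rather than a join, which is the kind of query a graph database optimizes for.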

More specifically, Amazon Neptune supports two main standards for describing and querying graph data: Apache TinkerPop3-style property graphs, queried with the Gremlin graph traversal language, and Resource Description Framework (RDF) graphs, queried with the declarative language SPARQL.
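To show how the two query styles compare, here is the same question, "who does Alice know?", expressed in both languages. The queries are held as Python strings for illustration; the property names and the `foaf` vocabulary are assumptions about how the data might be modeled, not anything Neptune prescribes.

```python
# Gremlin (property graph): an imperative-style traversal that starts at
# a vertex and walks outgoing "knows" edges.
gremlin_query = "g.V().has('name', 'Alice').out('knows').values('name')"

# SPARQL (RDF): a declarative pattern match over subject-predicate-object
# triples, here using the (assumed) foaf vocabulary.
sparql_query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name WHERE {
  ?alice  foaf:name  "Alice" .
  ?alice  foaf:knows ?friend .
  ?friend foaf:name  ?name .
}
"""
```

Gremlin reads like a chain of traversal steps, while SPARQL describes the shape of the answer and lets the engine find matching triples.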

How to use Amazon Neptune

To launch an instance, navigate to the Neptune console, where a launch wizard is available. From there, you can name your first instance and select an instance type, then configure advanced options similar to those in other AWS database services such as Amazon Relational Database Service (RDS) or Amazon ElastiCache.
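For those who prefer scripting over the console, the wizard's choices (cluster name, instance name, instance type) map onto API parameters. The sketch below builds the parameter sets as plain dictionaries; the identifiers and instance class are placeholder assumptions, and the actual calls via boto3's `neptune` client are shown commented out since they require AWS credentials.

```python
# Placeholder parameters mirroring the launch wizard's fields.
cluster_params = {
    "DBClusterIdentifier": "my-first-neptune-cluster",  # cluster name (assumed)
    "Engine": "neptune",
}

instance_params = {
    "DBInstanceIdentifier": "my-first-neptune-instance",  # instance name (assumed)
    "DBInstanceClass": "db.r4.large",                     # instance type (assumed)
    "Engine": "neptune",
    "DBClusterIdentifier": cluster_params["DBClusterIdentifier"],
}

# With credentials configured, the calls would look like:
# import boto3
# neptune = boto3.client("neptune")
# neptune.create_db_cluster(**cluster_params)
# neptune.create_db_instance(**instance_params)
```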

Additionally, if you already have applications that work with SPARQL or TinkerPop, you can start using Neptune simply by updating the endpoint those applications connect to.
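For a TinkerPop application, that endpoint swap typically amounts to one changed URL. This sketch builds the WebSocket URL an existing Gremlin client would point at; the cluster hostname is a placeholder, and port 8182 with the `/gremlin` path is assumed from Neptune's Gremlin endpoint convention.

```python
def gremlin_endpoint(cluster_endpoint, port=8182):
    """Build the WebSocket URL a Gremlin client connects to (Neptune convention)."""
    return f"wss://{cluster_endpoint}:{port}/gremlin"

# Placeholder cluster endpoint -- yours comes from the Neptune console.
url = gremlin_endpoint("my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com")
print(url)

# In an existing gremlinpython app, this URL is usually the only change:
# from gremlin_python.driver import client
# c = client.Client(url, "g")
# results = c.submit("g.V().limit(5)").all().result()
```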

Within Amazon Neptune, you can also configure additional options such as the parameter group, port, and cluster name, as well as management features like KMS-based encryption at rest, failover priority, and the backup retention period.

Once your instances have finished provisioning, you can find the connection endpoint for either Gremlin or SPARQL on the cluster's Details page. At that point, you're ready to run queries.
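As a first query against the SPARQL endpoint, the sketch below prepares a simple "fetch ten triples" request. The hostname is a placeholder, and port 8182 with the `/sparql` path is assumed from Neptune's SPARQL endpoint convention; the actual HTTP call is commented out since it needs network access to the cluster.

```python
from urllib.parse import urlencode

# Placeholder SPARQL endpoint -- copy yours from the cluster's Details page.
endpoint = "https://my-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/sparql"

# A trivial smoke-test query: return any ten triples in the store.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
payload = urlencode({"query": query})  # form-encoded body per the SPARQL protocol

# Sending the request would look like:
# import urllib.request
# req = urllib.request.Request(
#     endpoint, data=payload.encode(),
#     headers={"Content-Type": "application/x-www-form-urlencoded"})
# print(urllib.request.urlopen(req).read())
```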

