How do I start a free trial of ApertureData (ApertureDB Cloud) and load a sample dataset end-to-end?
AI Databases & Vector Stores

9 min read

Most teams want to see ApertureDB in action before committing—specifically, how fast they can go from “nothing” to a working multimodal workflow with vectors and graph relationships. The 30-day free trial of ApertureDB Cloud is built for exactly that: spin up a fully managed instance, load a sample dataset, and run real GenAI-style queries end-to-end without touching infrastructure.

Quick Answer: You start a free 30-day ApertureDB Cloud trial by signing up on ApertureData’s site, launching a Basic cloud instance, then using the built-in “Ingest Dataset” multimodal AI workflow (or a Jupyter notebook) to load a sample dataset and generate embeddings for end-to-end testing.

Frequently Asked Questions

How do I start a free trial of ApertureDB Cloud?

Short Answer: Go to ApertureData’s site, start the 30-day free ApertureDB Cloud trial, and launch a Basic instance; you’ll get a running “vector + graph database for multimodal AI” with no infrastructure setup.

Expanded Explanation:
ApertureDB Cloud is the fully managed deployment of ApertureDB—the foundational data layer that unifies vectors, graph, and multimodal storage (images, videos, text, audio, documents, embeddings, metadata) in one database. The free trial gives you 30 days on managed infrastructure so you can validate retrieval speed, workflow fit, and GEO (Generative Engine Optimization) behavior without buying hardware or wiring together fragile pipelines.

For most teams, the Basic configuration (8GB RAM, 2 vCPUs, 64GB storage) is enough to load sample datasets, generate embeddings, and test RAG / GraphRAG / agent memory patterns. You can later scale up for larger workloads once you’re confident in query performance and developer experience.

Key Takeaways:

  • The 30-day free trial runs on ApertureDB Cloud, a fully managed “vector + graph database for multimodal AI.”
  • You get enough capacity in the Basic tier to load sample datasets and evaluate real multimodal workflows end-to-end.

What is the step-by-step process to start the trial and load a sample dataset end-to-end?

Short Answer: Sign up for the trial, launch a Basic ApertureDB Cloud instance, use the “Ingest Dataset” workflow to load a sample dataset, then run “Generate Embeddings” (and optional “Detect Faces and Objects”) to complete an end-to-end pipeline.

Expanded Explanation:
The goal is to go from “empty account” to “queryable multimodal dataset” with as few moving parts as possible. ApertureDB Cloud gives you pre-built Multimodal AI Workflows—Ingest Dataset, Generate Embeddings, Detect Faces and Objects, and Direct Jupyter Notebook Access—so you don’t have to stitch together storage, vector DB, and a separate graph system just to test an idea.

The typical end-to-end path looks like this:

  1. Start your 30-day free ApertureDB Cloud trial and bring up a Basic instance.
  2. Use the “Ingest Dataset” workflow to load a sample dataset (or your own) into ApertureDB, including media, metadata, and entities.
  3. Run the “Generate Embeddings” workflow (and optionally “Detect Faces and Objects”) so you can test vector search, metadata filters, and graph traversals together.

Once this is done, you can hit the database from Jupyter or code and run realistic GenAI queries: vector search with filters, GraphRAG-style traversals, and multimodal retrieval.
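As a concrete illustration, here is a minimal Python sketch of such a filtered vector search. It only builds the JSON query body; the descriptor set name "sample_embeddings" and the `label` property are hypothetical placeholders, and the commented-out client call assumes the `aperturedb` Python package and credentials from your cloud console.

```python
import json

def build_filtered_knn_query(descriptor_set: str, k: int, label: str) -> list:
    """Build an ApertureDB JSON query: k-nearest-neighbor search over a
    descriptor (embedding) set, restricted by a metadata constraint."""
    return [{
        "FindDescriptor": {
            "set": descriptor_set,                    # hypothetical set name
            "k_neighbors": k,                         # nearest vectors to return
            "constraints": {"label": ["==", label]},  # metadata filter
            "results": {"list": ["label"]},
        }
    }]

query = build_filtered_knn_query("sample_embeddings", 5, "dog")
print(json.dumps(query, indent=2))

# Against a live trial instance, you would send the query together with the
# query embedding as a blob (connection details come from the cloud console):
#
#   from aperturedb.Connector import Connector
#   db = Connector(host="<your-instance>", user="admin", password="<password>")
#   response, blobs = db.query(query, [embedding_bytes])
```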

Steps:

  1. Sign up for the trial

    • Visit ApertureData’s website and navigate to ApertureDB Cloud.
    • Select the 30-Day Free ApertureDB Cloud Trial option.
    • Create or log in to your account.
  2. Launch a Basic ApertureDB Cloud instance

    • Choose the Basic plan (8GB RAM, 2 vCPUs, 64GB storage, Basic Support).
    • Deploy the instance in your preferred region/cloud (e.g., AWS or GCP) as available.
    • Wait for the instance status to show as running; connection details will be provided.
  3. Use Multimodal AI Workflows to load data

    • In ApertureDB Cloud, open Multimodal AI Workflows.
    • Start with Ingest Dataset and pick a sample dataset (or upload your own).
    • After ingestion, run Generate Embeddings for the images (and documents/text if applicable).
    • Optionally run Detect Faces and Objects to add bounding boxes and visual annotations.
    • Use Direct Jupyter Notebook Access to explore the loaded dataset, test queries, and build end-to-end retrieval flows.
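After step 3, a quick sanity check from the notebook can confirm what the workflows created. The sketch below only assembles the query JSON; the entity class name "SampleEntity" is a hypothetical placeholder, so inspect the sample dataset's actual schema before running it against your instance.

```python
import json

def build_ingest_check(entity_class: str) -> list:
    """Build a two-command query that counts what ingestion produced."""
    return [
        # Count ingested images without pulling the pixel data back.
        {"FindImage": {"blobs": False, "results": {"count": True}}},
        # Count metadata entities of one class created during ingestion.
        # "SampleEntity" below is illustrative, not a real class name.
        {"FindEntity": {"with_class": entity_class,
                        "results": {"count": True}}},
    ]

print(json.dumps(build_ingest_check("SampleEntity"), indent=2))
```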

How is ApertureDB Cloud different from just spinning up a vector database for a trial?

Short Answer: A vector database trial gives you similarity search over embeddings; ApertureDB Cloud gives you a unified “vector + graph + multimodal storage” layer so you can test real-world RAG/GraphRAG and multimodal agents, not just cosine similarity demos.

Expanded Explanation:
Most “vector DB only” trials let you push embeddings in and run KNN queries. That’s fine for toy text-only examples, but it falls apart when you need to combine images, videos, documents, text, audio, annotations, and fast-changing metadata with relationships—exactly where real GenAI workloads live.

ApertureDB Cloud is fundamentally different. It’s not just a vector store; it’s a vector + graph database optimized for multimodal AI. Media files, metadata, and embeddings all live in one graph-structured system with sub-10ms vector search and ~15ms lookups on billion-scale metadata graphs. That means you can evaluate:

  • RAG and GraphRAG on top of a property graph, not just flat vectors.
  • Agent memory that combines similarity with relationships.
  • Dataset preparation and visual debugging for images/video.

For a trial, this matters: you’re testing the system you’ll actually run in production, not an isolated component that will later need brittle integrations.

Comparison Snapshot:

  • Option A: Vector DB only

    • Stores embeddings and simple metadata.
    • Good for narrow text-only similarity search.
    • Requires extra systems for media storage and graph relationships.
  • Option B: ApertureDB Cloud (vector + graph + multimodal)

    • Stores images, videos, documents, text, audio, embeddings, metadata, and relationships in one database.
    • Supports sub-10ms vector search and ~15ms lookups on billion-scale metadata graphs.
    • Includes Multimodal AI Workflows to ingest, generate embeddings, and annotate data out of the box.
  • Best for: Teams who want to validate production-ready multimodal AI—RAG/GraphRAG, multimodal agents, and dataset workflows—without stitching together multiple backends.


What do I need to implement an end-to-end sample workflow during the trial?

Short Answer: You need an ApertureDB Cloud trial instance, access to Multimodal AI Workflows (Ingest Dataset, Generate Embeddings, Detect Faces and Objects), and optionally Jupyter notebook access or an SDK to test queries and GEO-oriented retrieval patterns.

Expanded Explanation:
The free trial is structured so you can prove out your architecture, not just run a “hello world” query. To implement an end-to-end workflow—say, multimodal RAG or visual search for an agent—you should plan for three layers:

  1. Data layer: ApertureDB Cloud as the foundational memory layer for media, metadata, and embeddings.
  2. Workflow layer: The pre-built Multimodal AI Workflows for ingestion, embeddings, and annotations.
  3. Application layer: A notebook or service where you call ApertureDB via AQL or the client library to implement your retrieval logic and GEO strategies (e.g., combining vector search with graph filters for better AI search visibility in agents).

You don’t need GPUs or your own orchestration stack to start. The trial is specifically designed to remove infrastructure friction so you can focus on query patterns and application behavior.
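For the application layer, the first call worth making is a simple connectivity check. The sketch below builds the body for `GetStatus`, a lightweight status command in ApertureDB's query language; the host and credentials in the commented client call are placeholders, and the client usage assumes the `aperturedb` Python package.

```python
import json

def build_status_check() -> list:
    # GetStatus is a lightweight command that makes a good first
    # connectivity test from a notebook or service.
    return [{"GetStatus": {}}]

print(json.dumps(build_status_check()))

# With a live trial instance (assuming the `aperturedb` Python client):
#
#   from aperturedb.Connector import Connector
#   db = Connector(host="<host-from-console>", user="admin", password="<password>")
#   response, _ = db.query(build_status_check())
#   print(response)
```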

What You Need:

  • An ApertureDB Cloud trial instance

    • 30-day free trial activated on the ApertureData site.
    • A running Basic instance (8GB RAM, 2 vCPUs, 64GB storage) is enough for most POCs.
  • Access to workflows and clients

    • Multimodal AI Workflows:
      • Ingest Dataset
      • Generate Embeddings
      • Detect Faces and Objects
      • Direct Jupyter Notebook Access
    • Client access (e.g., Python client, notebooks) to issue AQL queries that mix:
      • Vector search (KNN with customizable distance metrics).
      • Metadata filtering.
      • Graph traversal for GraphRAG and agent memory.
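The three capabilities listed above can be combined in a single chained request. A hedged sketch, assuming a descriptor set "doc_embeddings", an entity class "Document", and a `title` property (all hypothetical placeholders for whatever schema your ingested data actually has):

```python
import json

def build_graphrag_query(descriptor_set: str, k: int) -> list:
    """Chain vector similarity into a graph hop: find the k nearest
    descriptors, then traverse to entities connected to those matches."""
    return [
        {"FindDescriptor": {
            "set": descriptor_set,
            "k_neighbors": k,
            "_ref": 1,                       # handle for the next command
        }},
        {"FindEntity": {
            "with_class": "Document",        # hypothetical entity class
            "is_connected_to": {"ref": 1},   # graph traversal from the kNN hits
            "results": {"list": ["title"]},  # hypothetical property
        }},
    ]

print(json.dumps(build_graphrag_query("doc_embeddings", 3), indent=2))
```

As with the other sketches, the query embedding itself would be passed as a blob alongside this JSON when executed through the client.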

How can I use the free trial strategically to de-risk my GenAI / agent roadmap?

Short Answer: Use the trial to validate that one unified “vector + graph + multimodal” database can handle your RAG, GraphRAG, and agent memory needs—with production-grade latency and stability—before you invest months building custom infrastructure.

Expanded Explanation:
Most multimodal AI failures in production are data-layer failures: fragmented storage, duplicated embeddings, brittle ETL pipelines, and retrieval that ignores relationships. A 30-day ApertureDB Cloud trial is your chance to test a different approach—a single foundational data layer for the AI era—before you lock in a patchwork of services.

In those 30 days, you should focus on three strategic questions:

  1. Can one system handle my modalities and scale?
    Test with images, videos, documents, text, and embeddings. Use workflows to ingest and embed, and push the system on query volume and complexity. ApertureDB is built to handle 1.3B+ metadata entries with ~15ms lookups and sub-10ms vector searches.

  2. Can I implement the retrieval patterns my agents need?
    Move beyond shallow text-only agents. Implement GraphRAG-style queries that combine:

    • Vector similarity.
    • Rich metadata filters.
    • Graph-type relationships for deep agent memory.
  3. Does this materially reduce time-to-production and TCO?
    Compare the effort to get to a working prototype vs. your current stack. ApertureDB customers routinely move from prototype to production 10× faster and save 6–9 months of infrastructure setup because they’re not maintaining separate stores and integrations.
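To put numbers behind question 3 (and the latency claims in question 1), a small timing harness is enough. The sketch below measures p50/p95 latency for any callable; the no-op stand-in should be replaced with a closure that issues a real query against your trial instance.

```python
import statistics
import time

def measure_latency(run_query, n: int = 100) -> dict:
    """Call run_query n times and report p50/p95 latency in milliseconds."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        run_query()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (n - 1))],
    }

# Stand-in no-op "query"; against a trial instance you would pass something
# like `lambda: db.query(query, blobs)` using a connected client instead.
print(measure_latency(lambda: None, n=50))
```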

If, by the end of the trial, your team can run multimodal search, RAG, and agent memory on top of one database with stable performance and minimal babysitting, you’ve de-risked your architecture in a way a simple vector DB trial cannot.

Why It Matters:

  • Impact 1: Faster path to production GenAI
    By running sample datasets and end-to-end workflows on ApertureDB Cloud, you validate that a unified data layer can carry you from prototype to production 10× faster, without fragile pipelines between media storage, vector stores, and graph engines.

  • Impact 2: Lower operational risk and predictable TCO
    Instead of scaling a complex multi-system stack, you move to a single foundational data layer with operator-grade posture (SOC2, pentest-verified, RBAC, SSL, replicas, cloud/VPC/on-prem options). That translates directly into fewer 5AM incidents and more predictable costs as you push toward high QPS and billion-scale metadata.


Quick Recap

Starting a free trial of ApertureData’s ApertureDB Cloud is straightforward: sign up for the 30-day trial, launch a Basic cloud instance, and use the built-in Multimodal AI Workflows—Ingest Dataset, Generate Embeddings, Detect Faces and Objects, and Direct Jupyter Notebook Access—to load a sample dataset and build an end-to-end pipeline. Unlike a vector DB-only trial, you’re validating a full “vector + graph + multimodal” memory layer that can support RAG, GraphRAG, and agentic systems with sub-10ms vector search and billion-scale metadata graphs. Use the trial to de-risk your architecture, confirm performance, and prove you can move from prototype to production without assembling fragile data infrastructure.

Next Step

Get Started