I have a knowledge graph (KG) that is stored in a .json file in the ./data directory. I'm creating the index using the code below:
import os
os.environ["OPENAI_API_KEY"] = "INSERT OPENAI KEY"
from llama_index import (
    SimpleDirectoryReader,
    ServiceContext,
    KnowledgeGraphIndex,
)
from llama_index.graph_stores import SimpleGraphStore
from llama_index.llms import OpenAI
from llama_index.storage.storage_context import StorageContext
documents = SimpleDirectoryReader("./data").load_data()
llm = OpenAI(temperature=0, model="text-davinci-002")
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=512)
graph_store = SimpleGraphStore()
storage_context = StorageContext.from_defaults(graph_store=graph_store)
index = KnowledgeGraphIndex.from_documents(
    documents,
    max_triplets_per_chunk=2,
    storage_context=storage_context,
    service_context=service_context,
)
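For reference, one alternative I've been considering: if the .json file already contains explicit triplets, I could parse them myself and insert them with KnowledgeGraphIndex.upsert_triplet instead of having the LLM re-extract them from raw text. This is only a sketch; the file layout below (a list of objects with "subject", "relation", "object" keys) is a made-up example, not my actual schema:

```python
import json

def load_triplets(path):
    """Parse a JSON file of triplets into (subject, relation, object) tuples.

    Assumes a hypothetical layout:
        [{"subject": "Alice", "relation": "knows", "object": "Bob"}, ...]
    """
    with open(path) as f:
        records = json.load(f)
    return [(r["subject"], r["relation"], r["object"]) for r in records]

# The tuples could then be inserted directly, skipping LLM extraction:
#   index = KnowledgeGraphIndex([], storage_context=storage_context)
#   for triplet in load_triplets("./data/kg.json"):
#       index.upsert_triplet(triplet)
```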
Is this the correct way to load a KG stored as a .json file? Or is there a more deliberate way to build the index directly from the .json file, rather than having the LLM re-extract triplets from it as plain text?