Gradio error: AttributeError: 'str' object has no attribute 'index_store'

Hi everyone,
I am currently trying to upload a model to Hugging Face, but when I try, the following error appears:

The error appeared when I changed from “GPTVectorStoreIndex.load_from_disk” to “load_index_from_storage”, because Hugging Face does not seem to support the former any longer.
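Judging from the traceback below, I suspect load_index_from_storage expects a StorageContext object rather than a file path (the string I pass in seems to be used directly as the storage_context). Based on my reading of the llama_index docs, the loading side should look roughly like this sketch; note that "./storage" is only the library default and my actual persist directory may differ:

from llama_index import StorageContext, load_index_from_storage

# rebuild the storage context from the directory the index was persisted to
# ("./storage" is the default persist_dir; the folder name here is a guess,
#  and service_context is defined as in my code further below)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context, service_context=service_context)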

Could I get some tips on how to solve this problem?

INFO:matplotlib.font_manager:generated new fontManager
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

Downloading (…)olve/main/vocab.json: 100%|██████████| 1.04M/1.04M [00:00<00:00, 119MB/s]
Downloading (…)olve/main/merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 106MB/s]
Downloading (…)/main/tokenizer.json: 100%|██████████| 1.36M/1.36M [00:00<00:00, 369MB/s]
Downloading (…)lve/main/config.json: 100%|██████████| 665/665 [00:00<00:00, 679kB/s]
INFO:llama_index.indices.loading:Loading all indices.
Traceback (most recent call last):
  File "app.py", line 53, in <module>
    load_index()
  File "app.py", line 37, in load_index
    index = load_index_from_storage('index.json',service_context=service_context)
  File "/home/user/.local/lib/python3.8/site-packages/llama_index/indices/loading.py", line 33, in load_index_from_storage
    indices = load_indices_from_storage(storage_context, index_ids=index_ids, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/llama_index/indices/loading.py", line 64, in load_indices_from_storage
    index_structs = storage_context.index_store.index_structs()
AttributeError: 'str' object has no attribute 'index_store'

My code is as follows:

import os
from llama_index import SimpleDirectoryReader, GPTListIndex, GPTVectorStoreIndex, LLMPredictor, PromptHelper, ServiceContext, StorageContext, load_index_from_storage
#from langchain.chat_models import ChatOpenAI
from langchain import OpenAI
import gradio as gr
import random
import time
import sys

os.environ["OPENAI_API_KEY"] = 'my key'
messages = [
    {"role": "system", "content": "follow the three instructions below for your outputs:"},
    {"role": "system", "content": "1. make sure all expressions are compatible with Japanese"},
    {"role": "system", "content": "2. replying in English is strictly forbidden"},
    {"role": "system", "content": "3. use Japanese only for outputs"},
]

def load_index():
    max_input_size = 4096
    max_chunk_overlap = 20
    chunk_size_limit = 600
    num_outputs = 768
    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.6, model_name="text-davinci-003", max_tokens=num_outputs))
    global index
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    index = load_index_from_storage('index.json', service_context=service_context)  # <- the line that raises the AttributeError
    return "indexing finished"

def chat(chat_history, user_input):
  bot_response = index.query(user_input, response_mode="compact")
  print("Q:", user_input)
  response = ""
  # stream the answer back to the chatbot one character at a time
  for letter in bot_response.response:
      response += letter
      yield chat_history + [(user_input, response)]
  print("A:", response)



with gr.Blocks() as demo:
    gr.Markdown('AI chat(β 0.1)')
    load_index()
    with gr.Tab("chatbot"):
        chatbot = gr.Chatbot()
        message = gr.Textbox()
        message.submit(chat, [chatbot, message], chatbot)


demo.queue(max_size=100).launch(share=True)
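For reference, my understanding is that the saving side changed as well: instead of index.save_to_disk("index.json"), the index is now persisted to a directory that load_index_from_storage reads back. Here is a sketch of the build-and-persist step as I understand it (the "docs" input folder and "./storage" directory are placeholders, not what I actually used):

from llama_index import SimpleDirectoryReader, GPTVectorStoreIndex

documents = SimpleDirectoryReader("docs").load_data()  # placeholder input folder
index = GPTVectorStoreIndex.from_documents(documents)
# writes docstore.json, index_store.json and vector_store.json into ./storage
index.storage_context.persist(persist_dir="./storage")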

Thank you in advance for your kind help.

gentle bump.