How to use multiple context indexes with LLM

Hi, I'm not sure where the best place to ask this question is, which is a question in itself, I guess. I am trying to build multiple contexts (specialist knowledge indexes), one for each topic area, using embeddings and FAISS. The idea is to create a library of different specialist knowledge topics for use by an LLM, combining appropriate subsets at runtime to provide a specific context for each query.

I can see a few ways this might be possible, but I am thinking the most efficient might be to conduct similarity searches across the multiple relevant indexes, combine those results somehow, and then possibly prune out lower-ranking results. The motivation is to only use indexes that are relevant to the question, to lower the noise-to-information ratio, to reduce computational overhead, and to move away from one big index for everything.

Does anyone have ideas on how this might be most efficiently achieved conceptually? I don't have code to share at this point because it's a question of assessing the approach conceptually.
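To make the "search each relevant index, merge, then prune" idea concrete, here is a minimal sketch. It uses plain NumPy brute-force cosine search to stand in for the per-topic FAISS indexes (with FAISS, each `build_index` call would be something like an `IndexFlatIP` over normalised vectors); the function and variable names are all hypothetical, just for illustration. The key point is that if all indexes use the same embedding model and metric, scores are directly comparable, so merging is a simple pool-and-sort:

```python
import numpy as np

def build_index(vectors):
    # Stand-in for a per-topic FAISS index: a matrix of
    # L2-normalised embeddings searched by inner product.
    v = np.asarray(vectors, dtype=np.float32)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def search(index, query, k):
    # Cosine similarity = inner product on normalised vectors.
    q = query / np.linalg.norm(query)
    scores = index @ q
    top = np.argsort(scores)[::-1][:k]
    return [(float(scores[i]), int(i)) for i in top]

def combined_search(indexes, query, k_per_index=3, k_final=3, min_score=0.0):
    # 1) search only the topic indexes deemed relevant to the query,
    # 2) pool the hits (scores are comparable across indexes because
    #    the metric and embedding model are shared),
    # 3) prune to the globally best k_final above a score threshold.
    hits = []
    for name, index in indexes.items():
        for score, row in search(index, query, k_per_index):
            hits.append((score, name, row))
    hits = [h for h in hits if h[0] >= min_score]
    hits.sort(reverse=True)
    return hits[:k_final]

rng = np.random.default_rng(0)
indexes = {
    "topic_a": build_index(rng.normal(size=(10, 8))),
    "topic_b": build_index(rng.normal(size=(10, 8))),
}
query = rng.normal(size=8).astype(np.float32)
results = combined_search(indexes, query)  # [(score, topic, row_id), ...]
```

The selection of which indexes to include (the `indexes` dict passed in) could itself be done by routing the query, e.g. comparing it against one centroid embedding per topic; with raw L2 distances instead of cosine scores, the merge would sort ascending instead.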