You can check the details in the post!
I recently tested the knowledge-retrieval capabilities of Claude 3 and GPT-4 by providing them with a specific information source: the Wikipedia page on the 96th Academy Awards. Since this ceremony took place after both models' training cutoffs, neither could have prior knowledge of it and each had to rely solely on the information in the provided page.
I asked both models questions like "Who were the winners?" and "Can you summarize the awards?" Both Claude 3 and GPT-4 gave reasonably good responses, demonstrating their ability to retrieve and synthesize the relevant information.
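For anyone who wants to reproduce the setup, here is a minimal sketch of how the same question can be posed to both providers. The prompt wording, the `build_messages` helper, and the model IDs in the comments are my own illustrative assumptions, not the exact code used in the test:

```python
# Sketch of the comparison setup: paste the article text into a single
# chat message and ask a question about it. Everything here is assumed
# for illustration; adapt it to your own prompts and model choices.
def build_messages(page_text: str, question: str) -> list[dict]:
    """Both the Anthropic and OpenAI chat APIs accept a list of
    role/content messages, so one payload can serve both tests."""
    return [
        {
            "role": "user",
            "content": (
                "Using only the article below, answer the question.\n\n"
                f"Article:\n{page_text}\n\n"
                f"Question: {question}"
            ),
        }
    ]

questions = ["Who were the winners?", "Can you summarize the awards?"]
page = "(full text of the 96th Academy Awards Wikipedia page goes here)"
msgs = build_messages(page, questions[0])

# The payload would then be sent to each provider's chat endpoint, e.g.:
#   anthropic.Anthropic().messages.create(
#       model="claude-3-sonnet-20240229", max_tokens=1024, messages=msgs)
#   openai.OpenAI().chat.completions.create(
#       model="gpt-4-turbo", messages=msgs)
```

Keeping the prompt identical for both models is what makes the quality comparison meaningful.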
The key difference I observed is cost: Claude 3 is significantly more cost-effective, at roughly one-third the price of GPT-4. This suggests Claude 3 may be the more practical choice for knowledge-retrieval tasks like this, especially when budget is a consideration.
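To make the "about one-third" figure concrete, here is a back-of-the-envelope calculation. The per-million-token prices below are my assumptions (roughly the early-2024 list prices for Claude 3 Sonnet and GPT-4 Turbo) and should be checked against each provider's current pricing page, as should the token counts:

```python
# Illustrative API cost comparison. Prices and token counts are assumed
# for the sake of the example, not measured from the actual test.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "claude-3-sonnet": (3.00, 15.00),
    "gpt-4-turbo": (10.00, 30.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the assumed per-token prices."""
    price_in, price_out = PRICES[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# A long Wikipedia article easily runs ~20k input tokens;
# assume a ~500-token answer.
claude_cost = request_cost("claude-3-sonnet", 20_000, 500)  # $0.0675
gpt4_cost = request_cost("gpt-4-turbo", 20_000, 500)        # $0.2150
ratio = claude_cost / gpt4_cost                             # ~0.31
```

Because the article itself dominates the token count, the input-price gap drives most of the difference, which is why the ratio lands near one-third under these assumptions.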