In an ideal world, you would have a contextId that you could pass to OpenAI prompt calls, and you would be able to manage that context separately.
So you could pass it code files (with expiration dates), and you could also provide a list of conversationIDs, so that when answering a particular prompt request, GPT knows which previous prompts and responses to consider.
As of right now, I haven't used the API as a developer, but I've heard that you have to provide the ENTIRE context with EVERY prompt request.
How do you work around that?
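For what it's worth, here's a minimal sketch of what I understand the workaround to look like: since the API itself keeps no state, the client keeps the history and re-sends it on every call. The `ConversationContext` class and its message-count trimming heuristic are hypothetical illustrations (a real implementation would trim by token count against the model's context window), not anything the API provides.

```python
# Client-side context management, assuming the constraint that every
# request must carry the full history. ConversationContext is a
# hypothetical helper, not part of the OpenAI SDK.

class ConversationContext:
    def __init__(self, system_prompt, max_messages=20):
        self.system_prompt = system_prompt
        self.max_messages = max_messages  # crude stand-in for a token budget
        self.turns = []  # alternating user/assistant messages

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})
        # Drop the oldest turns when the history grows too long,
        # since the API itself won't remember anything between calls.
        if len(self.turns) > self.max_messages:
            self.turns = self.turns[-self.max_messages:]

    def build_messages(self, user_prompt):
        # The ENTIRE context is rebuilt and re-sent on every request.
        return (
            [{"role": "system", "content": self.system_prompt}]
            + self.turns
            + [{"role": "user", "content": user_prompt}]
        )

ctx = ConversationContext("You are a helpful assistant.")
ctx.add("user", "What is a closure?")
ctx.add("assistant", "A function that captures variables from its scope.")
messages = ctx.build_messages("Show me an example in Python.")
print(len(messages))  # system + 2 prior turns + new prompt = 4
```

The payload returned by `build_messages` is what would go into each chat completion request; everything outside the payload is forgotten by the server.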