It'd be great if this was focused on utilizing an open-source model like Llama 2 instead of GPT-4... #24
-
Hey @rossjohnson87, sorry for the delay. Thank you so much for the feedback, and good luck. Yes, what you're saying definitely makes sense. We are reviewing a PR that will enable this - basically to pipe any LLM into GPT Pilot. However, I'm not sure how far it will go without the best LLM. The costs really are quite large, but I wanted to see how far AI can go in building an app, so using the best model makes sense. There is still a lot of research that needs to be done before GPT Pilot can create a really meaningful app. I think we need more research on how to develop GPT Pilot so it can work at scale, rather than testing different LLMs to see how good they can be. Nevertheless, I think you'll be able to play with other LLMs and GPT Pilot soon.
-
Indeed, striking a balance between capabilities and accessibility is crucial, and your suggestion of favoring open source models for their cost-effectiveness and potential for wider community engagement is a valuable one. It will be interesting to see how projects like these evolve and adapt in the future. Your feedback and insights are appreciated, and I'm sure the developers will take them into consideration. Good luck with your endeavors as well!
-
Feel free to test GPT Pilot with Llama by configuring https://github.com/Pythagora-io/gpt-pilot/blob/main/pilot/.env.example. Local LLMs may already be supported; apparently you just need to set the relevant variables in that file.
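For anyone who wants to experiment before wiring things into GPT Pilot itself, here is a minimal sketch of the general pattern: point an OpenAI-compatible client at a local server (for example, a Llama model served behind an OpenAI-style /v1/chat/completions endpoint). The endpoint URL, model name, and environment variable names below are illustrative assumptions, not GPT Pilot's actual configuration - check the linked .env.example for the real keys.

```python
import os
import requests

# Illustrative assumption: a local Llama server exposing an OpenAI-compatible
# chat completions endpoint (e.g. llama.cpp server, LM Studio, or similar).
# These environment variable names are examples only; the real ones used by
# GPT Pilot are documented in its .env.example.
BASE_URL = os.environ.get("LOCAL_LLM_ENDPOINT", "http://localhost:8000/v1")
MODEL = os.environ.get("LOCAL_LLM_MODEL", "llama-2-13b-chat")


def chat(prompt: str) -> str:
    """Send a single chat message to the local OpenAI-compatible server."""
    response = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.7,
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Quick smoke test against the local model before pointing a larger tool at it.
    print(chat("Write a one-line Python hello world."))
```

If the local server speaks the OpenAI wire format, the same idea carries over to any tool that lets you override the API base URL and model name.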
-
GPT-4 is already the backbone of so many other projects, and it comes with a real cost. This project is building a POC to prove out what many similar projects have also attempted, and most of them use GPT-4 as the backbone model. I've come to the conclusion that depending on a proprietary API will probably lead to the same result the others have encountered: a ton of hype at first, people try it once, spend $10-15 on tokens, and then never touch it again as soon as the next project with a similar goal comes along.
For sustainability, while the open-source models are less capable, the fact that you can run them essentially for free would lead to much, much more participation and further experimentation by the community (and give the project a longer lifespan of community interest). The common thread in these projects is how quickly token usage grows as the AI's attempts get more and more complex. Using the best model seems like it would lead to less overall token usage since it's the 'smartest', but honestly I think not being constrained by tokens will lead to the most experimentation by the community, and hopefully the most productive results.
I wish you good luck and will be watching! Just a suggestion, after all.