Releases: alexeichhorn/typegpt
0.3.5
0.3.4
Fixed decoding issue
0.3.3
Full support for `gpt-4o-mini`
0.3.2
- Support for `gpt-4o` (including new tokenizer)
- Fixed maximum allowed tokens of the standard `gpt-3.5-turbo` model (now correctly 16K)
- Thrown `LLMException` objects now contain valuable information, such as the compiled system and user prompt
0.3.1
Support for openai library version >= 1.20.0
0.3
- Support for easy and typed few-shot prompting. Just implement the `few_shot_examples` function inside your prompt:

```python
from typegpt import BaseLLMResponse, FewShotExample, PromptTemplate

class ExamplePrompt(PromptTemplate):

    class Output(BaseLLMResponse):
        class Ingredient(BaseLLMResponse):
            name: str
            quantity: int

        ingredients: list[Ingredient]

    def system_prompt(self) -> str:
        return "Given a recipe, extract the ingredients."

    def few_shot_examples(self) -> list[FewShotExample[Output]]:
        return [
            FewShotExample(
                input="Let's take two apples, three bananas, and four oranges.",
                output=self.Output(ingredients=[
                    self.Output.Ingredient(name="apple", quantity=2),
                    self.Output.Ingredient(name="banana", quantity=3),
                    self.Output.Ingredient(name="orange", quantity=4),
                ])
            ),
            FewShotExample(
                input="My recipe requires five eggs and two cups of flour.",
                output=self.Output(ingredients=[
                    self.Output.Ingredient(name="egg", quantity=5),
                    self.Output.Ingredient(name="flour cups", quantity=2),
                ])
            )
        ]

    def user_prompt(self) -> str:
        ...
```
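Few-shot examples of this kind are typically compiled into alternating user/assistant message pairs placed ahead of the real user prompt, so the model sees demonstrations of the expected output format. The following is a self-contained sketch of that mechanism only, not typegpt's actual implementation; the `build_messages` helper and the plain-dict message format are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class FewShotExample:
    input: str
    output: str  # serialized expected response

def build_messages(
    system_prompt: str,
    examples: list[FewShotExample],
    user_prompt: str,
) -> list[dict]:
    """Compile a prompt with few-shot examples into chat messages.

    Each example becomes a user/assistant pair before the real query.
    """
    messages = [{"role": "system", "content": system_prompt}]
    for ex in examples:
        messages.append({"role": "user", "content": ex.input})
        messages.append({"role": "assistant", "content": ex.output})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages(
    "Given a recipe, extract the ingredients.",
    [FewShotExample(input="Two apples.", output="INGREDIENTS:\n- apple: 2")],
    "One egg and two cups of flour.",
)
# 1 system message + 1 example pair + 1 real user message
assert len(msgs) == 4 and msgs[2]["role"] == "assistant"
```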
- You can disable the automatically added system instruction that tells the LLM how to format its output. Only use this if you use few-shot prompting, a fine-tuned model, or instruct the model yourself (not recommended). Use it like this:

```python
class ExamplePrompt(PromptTemplate):
    settings = PromptSettings(disable_formatting_instructions=True)
    ...
```
- Support for the new `gpt-4-turbo-2024-04-09` / `gpt-4-turbo` model
0.2.2
Support for the new `gpt-3.5-turbo-0125` model version
0.2.1
Fully supports the new `gpt-4-0125-preview` model and the new `gpt-4-turbo-preview` model alias.
0.2
You can now nest response types. Note that you need to use `BaseLLMArrayElement` for classes that you want to nest inside a list. To add instructions inside an element of a `BaseLLMArrayElement`, you must use `LLMArrayElementOutput` instead of `LLMOutput`.
```python
from typegpt import BaseLLMArrayElement, BaseLLMResponse, LLMArrayElementOutput

class Output(BaseLLMResponse):

    class Item(BaseLLMArrayElement):
        class Description(BaseLLMResponse):
            short: str | None
            long: str

        title: str
        description: Description
        price: float = LLMArrayElementOutput(instruction=lambda pos: f"The price of the {pos.ordinal} item")

    items: list[Item]
    count: int
```
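The `instruction=lambda pos: ...` callback receives the element's position, so each item in the array can get its own instruction text (e.g. "the first item", "the second item"). Here is a self-contained sketch of that idea; the `Position` class and its `ordinal` property are illustrative stand-ins, not typegpt's internals:

```python
class Position:
    """Minimal stand-in for the position object passed to instruction lambdas."""

    def __init__(self, index: int):
        self.index = index  # zero-based position in the array

    @property
    def ordinal(self) -> str:
        # Simple English ordinals; a real implementation may cover more cases
        ordinals = {1: "first", 2: "second", 3: "third", 4: "fourth", 5: "fifth"}
        return ordinals.get(self.index + 1, f"{self.index + 1}th")

instruction = lambda pos: f"The price of the {pos.ordinal} item"

# Generate per-element instructions for a three-item array
lines = [instruction(Position(i)) for i in range(3)]
assert lines[0] == "The price of the first item"
assert lines[2] == "The price of the third item"
```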
Bug fix
Fixed an issue where `TypeOpenAI` was not importable.