Note that this is a very simple library and not very efficient. E.g. for the code that filters an array, it would run N prompts [1]:

`You are a filter agent.\nYour job is to return whether an item matches the criteria: ${criteria}\nRespond only with true or false.`

It's a cool demo, but I wouldn't use that in production; IMO having that code in a separate library offers little benefit and increases the risk of misuse.

[1]: https://github.com/montyanderson/incant/blob/73606e826d6e5b0...
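Concretely, the pattern described above is one model call per array element, something like this sketch (`callLLM` is a hypothetical stand-in, not the library's actual API):

```typescript
// Hypothetical stand-in for a chat-completion request.
async function callLLM(system: string, user: string): Promise<string> {
  // ... send `system` and `user` to your model provider
  return "true";
}

// One prompt per item: filtering N items costs N model calls.
async function filterWithLLM<T>(items: T[], criteria: string): Promise<T[]> {
  const system =
    "You are a filter agent.\n" +
    `Your job is to return whether an item matches the criteria: ${criteria}\n` +
    "Respond only with true or false.";
  const kept: T[] = [];
  for (const item of items) {
    const answer = await callLLM(system, JSON.stringify(item));
    if (answer.trim() === "true") kept.push(item);
  }
  return kept;
}
```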
So this is just asking an LLM to filter or select from an array? Where do the magic spells come in?
How does this differ from function calling? For example, the basic enums example for Gemini function calling:
> color_temp: {
>   type: Type.STRING,
>   enum: ['daylight', 'cool', 'warm'],
>   description: 'Color temperature of the light fixture, which can be `daylight`, `cool` or `warm`.',
> }
https://ai.google.dev/gemini-api/docs/function-calling?examp...
It’s the inverse of function calling. Here the function is calling the LLM, not vice versa.
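A minimal sketch of that inversion (`askModel` is a hypothetical helper, not part of either API; the enum check mirrors the Gemini example above):

```typescript
// Hypothetical stand-in for a raw chat-completion call.
async function askModel(promptText: string): Promise<string> {
  // ... call your model provider here
  return "cool";
}

type ColorTemp = "daylight" | "cool" | "warm";
const ALLOWED: readonly ColorTemp[] = ["daylight", "cool", "warm"];

// The "inverse" direction: an ordinary function that calls the LLM
// internally and validates the output against the enum itself,
// instead of the model deciding to call one of your functions.
async function classifyColorTemp(description: string): Promise<ColorTemp> {
  const raw = (await askModel(
    `Respond with exactly one of 'daylight', 'cool' or 'warm' for: ${description}`,
  )).trim();
  if (!ALLOWED.includes(raw as ColorTemp)) {
    throw new Error(`Model returned a value outside the enum: ${raw}`);
  }
  return raw as ColorTemp;
}
```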
I'm curious how the hallucination-free guarantee works? Does it only guarantee that the output is a subset of the input?
In the case of the male names, if I include a gender-neutral name like "Sam", does that get included because it can be a male name, or excluded because it can also be a female name? Can I set this to be inclusive or exclusive?
Looks interesting, though. Nice work.
`createFilter` filters the model's output against the input array [1], and `createSelector` throws if the returned index doesn't exist in the array [2]. Maybe this is what the author refers to as hallucination-free, but it falls pretty short.
[1]: https://github.com/montyanderson/incant/blob/master/mod.ts#L...
[2]: https://github.com/montyanderson/incant/blob/master/mod.ts#L...
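In other words, the "guarantee" is post-hoc output validation, roughly like this (a paraphrase of what those two guards do, not the library's exact code):

```typescript
// Filter guard: drop anything the model returns that wasn't in the input.
function guardFilter(input: string[], modelOutput: string[]): string[] {
  return modelOutput.filter((item) => input.includes(item));
}

// Selector guard: the model returns an index; throw if it's out of range.
function guardSelector<T>(input: T[], index: number): T {
  if (!Number.isInteger(index) || index < 0 || index >= input.length) {
    throw new Error(`Model returned an invalid index: ${index}`);
  }
  return input[index];
}
```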
Yeah, so it's just guaranteeing that the output is a subset of the inputs, thanks for the clarification.
vibecoding is a hell of a drug
> no hallucinations possible
It can still hallucinate a response, just one that's allowed by the filter.
E.g. if you have a filter with names of capital cities ["London", "Paris", "Madrid"] and you ask "What is the capital of France?", it could respond "Madrid".
Is that a hallucination, or is it just plain wrong?
An AI hallucination is any response that contains false or misleading information presented as fact. So a wrong answer is a hallucination.