Marcus Kohlberg for Encore

Enabling OpenAI to call functions in your app (TypeScript / Node.js)

Watch the video on YouTube:
https://www.youtube.com/watch?v=uX-p5xP8fqA

Full example code on GitHub, with instructions: https://github.com/encoredev/examples...

For more ways to enhance your apps with AI, check out Encore's open source templates: https://github.com/encoredev/examples
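As a rough illustration of the technique in the title: with OpenAI function calling, you describe your functions to the model as JSON-schema "tools", and when the model decides to call one, your app dispatches to a real function and sends the result back. The sketch below shows a tool definition and a local dispatcher; `getProductPrice` and the catalog are made-up examples, not code from the video.

```typescript
// A hypothetical tool definition, in the shape OpenAI's chat completions
// API expects for its `tools` parameter.
const tools = [
  {
    type: "function",
    function: {
      name: "getProductPrice",
      description: "Look up the price of a product by name",
      parameters: {
        type: "object",
        properties: { name: { type: "string" } },
        required: ["name"],
      },
    },
  },
];

// Example data the tool reads from (stand-in for a real database or API).
const catalog: Record<string, number> = { widget: 9.99, gadget: 24.5 };

// When the model returns a tool call, map the tool name to a real function
// and return a JSON string the model can read on the next request.
function executeTool(name: string, args: { name: string }): string {
  if (name === "getProductPrice") {
    const price = catalog[args.name];
    return JSON.stringify({ price: price ?? null });
  }
  throw new Error(`Unknown tool: ${name}`);
}
```

In a real app, `tools` would be passed to the OpenAI client on each chat request, and `executeTool` would run whenever the response contains tool calls.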

Top comments (6)

JoelBonetR 🥇 • Edited

I was interested in reading a post about what the title says; instead I found a link to a YouTube video.
I want to believe we all understand these are two different target audiences (or a similar audience in different time windows). It's not bad that you add a link to the video, or even embed it as you did above. But if I wanted to watch a video instead of reading, I'd probably be on YouTube directly. Or maybe I'm reading Dev.to because I can't access YouTube or turn on the volume where I am. Evaluating just the text above, this is a very low-quality post by any measure. One could well use AI to transcribe the video, double-check it, correct the formatting, and generate a high-quality post with low to medium effort.

Marcus Kohlberg (Encore)

Thanks for the feedback! This was posted using Dev.to's "video" feature, which I presume is intended to post videos and puts less emphasis on the written part. I agree the UX around what is a "video post" and what is a "blog post" is very unclear.

John Sawiris

Pretty impressive!
I wonder how the response time could be improved. I noticed in the demo that it took some time to process the user prompt. Would caching be a good option in this case?

Thanks for sharing!

Marcus Kohlberg (Encore)

The response time is mostly due to OpenAI's API processing the request. Caching may help speed up responses for repeat questions, and it's also a good way to minimize use of the OpenAI API, since it isn't free or unlimited.
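A minimal sketch of that caching idea, assuming exact-match prompts repeat often enough to be worth caching (real apps may want a TTL or semantic similarity instead); `callModel` is a hypothetical stand-in for whatever calls the OpenAI API:

```typescript
// In-memory cache keyed by the exact prompt text.
const responseCache = new Map<string, string>();

async function askWithCache(
  prompt: string,
  callModel: (p: string) => Promise<string>,
): Promise<string> {
  const hit = responseCache.get(prompt);
  if (hit !== undefined) return hit; // skip the OpenAI round trip entirely
  const answer = await callModel(prompt);
  responseCache.set(prompt, answer);
  return answer;
}
```

Repeat questions then return instantly and cost no API tokens; only novel prompts reach the model.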

Pedro Devoto

Pretty cool. How many requests to OpenAI were there? Is it only one or is it one after each function invocation?

Simon Johansson (Encore)

Hey! There is one request to OpenAI after each function invocation. So if you allow the LLM to call a lot of functions in your system, you can potentially get a lot of requests to OpenAI for each prompt. You should also try to limit the output of your functions, because the LLM will read through it all, and that eats up a lot of tokens.
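That request-per-invocation loop can be sketched roughly like this, with the OpenAI client abstracted behind a hypothetical `callModel` and `ModelTurn` type (not the actual SDK API) so the shape of the loop is visible:

```typescript
// Each round trip to the model either asks for a tool call or finishes
// with an answer. (Simplified: real responses can contain several tool calls.)
type ModelTurn =
  | { kind: "toolCall"; name: string; args: string }
  | { kind: "answer"; text: string };

async function runConversation(
  callModel: (history: string[]) => Promise<ModelTurn>,
  runTool: (name: string, args: string) => Promise<string>,
): Promise<{ answer: string; modelRequests: number }> {
  const history: string[] = [];
  let modelRequests = 0;
  for (;;) {
    const turn = await callModel(history);
    modelRequests += 1;
    if (turn.kind === "answer") return { answer: turn.text, modelRequests };
    // Every tool invocation costs one more round trip to OpenAI, and its
    // full output is appended to the history the model must read back —
    // which is why trimming tool output saves tokens.
    history.push(await runTool(turn.name, turn.args));
  }
}
```

With two tool invocations, the loop makes three model requests in total: the initial prompt plus one follow-up per invocation.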