First step - Proof of concept - Connecting to OpenAI GPT-4 API
Tue 15 Aug
I built a very basic endpoint POC that gets a dish name and returns an analysis from the OpenAI GPT-4 API about the ingredients and their nutritional value.
The bare minimum implementation
- OpenAI API Key. To use the OpenAI GPT-4 API you need an API key, so sign up and create one here: https://platform.openai.com/account/api-keys
- We are building a basic Node.js Express server that communicates with the OpenAI API and reads the API key from an environment variable. The project was initialised with the following npm commands:
$ npm init -y
$ npm install --save express openai body-parser dotenv
The main entry point for our server is index.js. I installed nodemon globally so I can run the server with the following command:
$ nodemon index.js
Then I added a script to package.json so I can run the server with npm:
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"startdev": "nodemon index.js"
},
So, now I can run the server with:
$ npm run startdev
And it will restart automatically when I make changes to the code.
4. Environment variables: .env includes OPENAI_API_KEY, and ./prompts/recipe.txt includes the system prompt.
5. The service: ./services/recipeService.js includes the code that connects to the OpenAI GPT-4 API and returns the response.
6. The router: ./routers/recipeRouter.js includes the code that defines the endpoint. At this stage, we define a POST endpoint that accepts Content-Type application/json and receives a dishName parameter in the body in the following format:
{
"dishName": "Spaghetti Carbonara"
}
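Putting the files above together, the project layout at this stage looks roughly like this (the tree is my own sketch of the structure described in the steps):

```
.
├── index.js              # server entry point
├── package.json
├── .env                  # OPENAI_API_KEY=...
├── prompts/
│   └── recipe.txt        # system prompt
├── routers/
│   └── recipeRouter.js   # defines the endpoint
└── services/
    └── recipeService.js  # calls the OpenAI API
```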
- That's it! Let's start looking into the basic code that makes this work.
The code
Let's start from the end.
services/recipeService.js
Setup:
const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");
require("dotenv").config();

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
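One thing to watch out for: Configuration happily accepts an undefined apiKey, and you only find out at request time. A small guard (my own addition, not part of the original POC, with a hypothetical name) makes the failure obvious at startup:

```javascript
// Fail fast if the API key was not loaded from .env.
// Returns the key, or throws a descriptive error when it is missing/blank.
function requireApiKey(env) {
  const key = env.OPENAI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("OPENAI_API_KEY is missing - check your .env file");
  }
  return key;
}
```

You would call it as `apiKey: requireApiKey(process.env)` instead of reading the variable directly.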
The function that connects to the OpenAI API:
exports.generateRecipe = async (dishName) => {
  // implementation here. Details below
};
Inside the function, we read the system prompt from the recipe.txt file:
const systemPrompt = fs.readFileSync("./prompts/recipe.txt", "utf8");
Next, we call the OpenAI API:
const response = await openai.createChatCompletion({
  model: "gpt-4",
  messages: [
    {
      role: "system",
      content: systemPrompt,
    },
    {
      role: "user",
      content: `Dish name: ${dishName}`,
    },
  ],
  temperature: 0.1,
  max_tokens: 256,
  top_p: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
});
return response.data;
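Returning response.data sends the raw API payload to the client; the model's actual answer lives under choices[0].message.content. A small helper to pull just the text out could look like this (a sketch with a hypothetical name, not part of the POC):

```javascript
// Extract the assistant's reply text from a chat completion payload.
// Returns null if the payload has no choices.
function extractMessage(data) {
  const choice = data && data.choices && data.choices[0];
  return choice ? choice.message.content : null;
}
```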
routers/recipeRouter.js
Setup:
const express = require("express");
const router = express.Router();
const recipeService = require("../services/recipeService");
The endpoint:
router.post("/generateRecipe", async (req, res) => {
  try {
    const dishName = req.body.dishName;
    const response = await recipeService.generateRecipe(dishName);
    res.status(200).send(response);
  } catch (error) {
    res.status(500).send(error);
  }
});
What's happening in that router.post definition?
- We are defining a POST endpoint with the path /generateRecipe
- We are using the async/await syntax to call the recipeService.generateRecipe function
- We are sending the response back to the client with res.status(200).send(response)
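Note that right now a missing dishName still reaches the OpenAI call and surfaces as a 500. A small validation step could reject it up front with a 400 instead; here is a sketch (the helper name is my own, not part of the POC):

```javascript
// Validate the request body.
// Returns an error message string, or null when the body is valid.
function validateDishName(body) {
  if (!body || typeof body.dishName !== "string" || body.dishName.trim() === "") {
    return "dishName is required and must be a non-empty string";
  }
  return null;
}

// Usage inside the route handler, before calling the service:
// const error = validateDishName(req.body);
// if (error) return res.status(400).send({ error });
```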
And last but not least, export the module:
module.exports = router;
index.js
Setup:
const express = require("express");
const bodyParser = require("body-parser");
const recipeRouter = require("./routers/recipeRouter");
// Create an Express app
const app = express();
// Use JSON middleware to automatically parse JSON
app.use(bodyParser.json());
The endpoint:
// Register the recipe router
app.use("/api/recipe", recipeRouter);
Start the server:
// Start the server on port 3003
const PORT = process.env.PORT || 3003;
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
prompts/recipe.txt - The system prompt
You are a nutritional expert. I am going to provide you with food recipes. Please reply with an analysis of the nutritional values and dietary information related to the food recipe I ask you about.
Testing with Postman
Open Postman and add a new request of type POST
The development server has been defined to listen on port 3003, so the URL should be http://localhost:3003/api/recipe/generateRecipe
Add the following header: Content-Type: application/json (Content-Type is the header name and application/json is the value)
Add the following Body in raw format:
{
"dishName": "Spaghetti Carbonara"
}
Send the request and you should get a response from the OpenAI API with an analysis of the nutritional values and dietary information for the dish you asked about.
If you get a 401 error, it usually means something is wrong with your API key.
Next steps
We are going to create a more general-purpose solution that allows defining multiple endpoints and multiple system prompts.
Ideas from Copilot. Some are actually good!
- Add more endpoints to the router
- Add more system prompts to the recipe.txt file
- Add more parameters to the request body
- Add more parameters to the OpenAI API call
- Add more error handling
- Add more tests
- Add more comments to the code
- Add more documentation
- Add more logging
- Add more security
- Add more performance optimisations
- Add more scalability
- Add more monitoring
- Add more CI/CD
- Add more automation
- Add more infrastructure as code
- Add more cloud
- Add more containers
- Add more Kubernetes
- Add more serverless
- Add more machine learning
- Add more AI
- Add more blockchain
- Add more IoT
- Add more AR/VR
- Add more quantum computing
- Add more space
- Add more time
- Add more dimensions
- Add more universes
- Add more multiverses
- Add more infiniteverses
- Add more :D
