cURL to JavaScript: the modern developer's guide
JavaScript runs the web.
Yet almost every piece of API documentation you read starts with a curl command.
It is the industry standard for showing what a request looks like, but it tells you nothing about how to implement it in your React frontend or Node.js backend.
For years, developers relied on clunky tools like the deprecated request package or raw XMLHttpRequest. But in 2025, the ecosystem has matured. With the native Fetch API available in Node.js 18+ and all modern browsers, converting cURL to JavaScript has never been cleaner.
Whether you are building a Next.js scraper or a client-side dashboard, mastering this translation is essential.
This guide will turn you from a copy-paster into a network request master.
Table of contents
- The universal language vs the web language
- The anatomy: cURL vs fetch
- Method 1: The native way (Fetch API)
- Method 2: The robust way (Axios)
- Method 3: The automated way
- Handling complex scenarios
- The CORS nightmare
- Monitoring your requests
The universal language vs the web language
curl is imperative. It says “do this now.”
JavaScript is asynchronous. It says “start this, and let me know when it’s done.”
This fundamental difference—The Promise—is where most developers stumble when translating commands. You cannot just run a request line-by-line; you have to handle the future result of that request.
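In practice, that means even a one-line curl GET becomes an await chain. A minimal sketch (the URL is illustrative):
// curl https://api.example.com/ping
// In JavaScript, the request returns a Promise you must resolve:
const response = await fetch("https://api.example.com/ping"); // top-level await works in Node 18+ ES modules and modern browsers
const text = await response.text(); // reading the body is a second Promise
console.log(text);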
Why use JavaScript for requests?
- Isomorphic: Run the exact same code in the browser and the server.
- Async/Await: Clean, non-blocking syntax that handles high concurrency.
- JSON Native: JavaScript Object Notation is… well, JavaScript. No parsing libraries needed.
The anatomy: cURL vs fetch
The fetch function is the modern standard. It takes two arguments: the URL and an “options” object.
The Translation Matrix:
| Feature | Curl Flag | JavaScript (Fetch) |
|---|---|---|
| Method | -X POST | method: 'POST' |
| URL | "https://api.com" | fetch('https://api.com', ...) |
| Headers | -H "Content-Type: application/json" | headers: { 'Content-Type': 'application/json' } |
| JSON Data | -d '{"a": 1}' | body: JSON.stringify({ a: 1 }) |
| Auth | -u "user:pass" | headers: { 'Authorization': 'Basic ' + btoa('user:pass') } |
| Follow Redirects | -L | redirect: 'follow' (default) |
| Insecure | -k | Requires custom https.Agent (Node only) |
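For example, the -u row of the matrix maps to a manually built Basic auth header. A minimal sketch (URL and credentials are placeholders):
// Equivalent of: curl -u user:pass https://api.example.com/secure
// btoa is built into browsers; in Node, Buffer does the same job:
const token = Buffer.from("user:pass").toString("base64"); // browser: btoa("user:pass")
const res = await fetch("https://api.example.com/secure", {
  headers: { Authorization: `Basic ${token}` },
});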
Method 1: The native way (Fetch API)
Let’s convert a real-world example. Here is a curl command to search for companies on Perplexity.
The cURL Command:
curl https://api.perplexity.ai/chat/completions \
-H "Authorization: Bearer $PERPLEXITY_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "llama-3-sonar-large-32k-online",
"messages": [
{
"role": "user",
"content": "Find me 5 AI startups in San Francisco"
}
]
}'
The JavaScript Translation:
const makeRequest = async () => {
const url = "https://api.perplexity.ai/chat/completions";
try {
const response = await fetch(url, {
method: "POST",
headers: {
Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "llama-3-sonar-large-32k-online",
messages: [
{
role: "user",
content: "Find me 5 AI startups in San Francisco",
},
],
}),
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
console.log(data);
} catch (error) {
console.error("Fetch failed:", error);
}
};
makeRequest();
Key “Gotchas” with Fetch:
- Double Await: You await the request to get headers, then you MUST await response.json() to read the body.
- Error Handling: fetch does not throw an error on 404 or 500 status codes. It only throws on network failure. You must manually check if (!response.ok).
- Stringify: You must explicitly wrap your object in JSON.stringify().
Method 2: The robust way (Axios)
For production applications, many teams prefer Axios. It handles the edge cases that fetch ignores.
- Automatic JSON parsing (no double await).
- Throws errors on 4xx/5xx status codes automatically.
- Better upload/download progress monitoring.
The same request in Axios:
import axios from "axios";
const makeRequest = async () => {
try {
const response = await axios.post(
"https://api.perplexity.ai/chat/completions",
{
model: "llama-3-sonar-large-32k-online",
messages: [{ role: "user", content: "Find me 5 AI startups" }],
},
{
headers: {
Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
"Content-Type": "application/json",
},
},
);
// Data is already parsed
console.log(response.data);
} catch (error) {
// Error handling is cleaner
console.error("Error status:", error.response?.status);
console.error("Error data:", error.response?.data);
}
};
If you are building complex scrapers that need to handle Google AI Overview layouts or timeouts, Axios is often the safer choice.
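As a sketch of that safety net, Axios accepts a timeout directly in its config (the timeout value here is illustrative, not a recommendation):
import axios from "axios";

// A pre-configured client: hung connections fail fast instead of blocking the scraper
const client = axios.create({
  baseURL: "https://api.perplexity.ai",
  timeout: 10000, // ms; Axios aborts the request and rejects the promise when exceeded
});

const response = await client.post("/chat/completions", { /* request body as above */ });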
Method 3: The automated way
Do not translate complex headers by hand. You will miss a quote or a bracket. Use tools.
1. CurlConverter
The best in the business.
- Paste your curl command into curlconverter.com.
- Select “Node-fetch” or “JavaScript”.
- Copy the output.
2. Postman
- Paste the cURL into Postman (“Import” -> “Raw Text”).
- Click the </> (Code) button.
- Select “JavaScript - Fetch” or “Nodejs - Axios”.
3. VS Code Extensions
Extensions like “REST Client” allow you to highlight a cURL command in your editor and run it, or generate a snippet directly.
Handling complex scenarios
1. Cookies and Sessions
Unlike Python’s requests.Session(), fetch is stateless. It does not remember cookies between requests.
If you are logging in and then scraping a dashboard, you need to manage the Cookie header manually.
// Step 1: Login
const loginRes = await fetch('https://api.site.com/login', { ... });
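// Note: get('set-cookie') folds multiple Set-Cookie headers into one string;
// newer Node releases also expose headers.getSetCookie(), which returns an array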
const cookie = loginRes.headers.get('set-cookie');
// Step 2: Use cookie
const dataRes = await fetch('https://api.site.com/data', {
headers: {
'Cookie': cookie
}
});
Note: In a browser environment, cookies are handled automatically (use credentials: 'include' for cross-origin requests). In Node.js, you have to do the work yourself.
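For the browser side, a minimal sketch (the URL is illustrative):
const res = await fetch('https://api.site.com/data', {
  credentials: 'include', // ask the browser to attach cookies, even cross-origin (the server must allow it)
});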
2. Aborting Requests (Timeouts)
fetch does not have a native timeout option. The classic workaround is an AbortController.
const controller = new AbortController();
const timeoutId = setTimeout(() => controller.abort(), 5000); // 5s timeout
try {
const res = await fetch(url, { signal: controller.signal });
} catch (err) {
if (err.name === "AbortError") {
console.log("Request timed out");
}
} finally {
clearTimeout(timeoutId);
}
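On newer runtimes (Node 18+ and current browsers), AbortSignal.timeout() collapses this pattern into a single line:
// Rejects with a TimeoutError (rather than AbortError) after 5 seconds
const res = await fetch(url, { signal: AbortSignal.timeout(5000) });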
3. Parallel Requests
JavaScript shines here. You can fire off 10 requests at once using Promise.all. This is vital for tasks like query fanout.
const ids = [1, 2, 3, 4, 5];
const promises = ids.map((id) => fetch(`https://api.com/items/${id}`));
const responses = await Promise.all(promises);
const data = await Promise.all(responses.map((res) => res.json()));
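One caveat: Promise.all rejects as soon as any single request fails. If partial results are acceptable, Promise.allSettled lets the rest finish and reports every outcome:
const results = await Promise.allSettled(promises);
const succeeded = results.filter((r) => r.status === "fulfilled");
const failed = results.filter((r) => r.status === "rejected");
console.log(`${succeeded.length} ok, ${failed.length} failed`);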
The CORS nightmare
If you copy a curl command that works in your terminal, paste it into your browser console, and it fails with a red error, you have met CORS (Cross-Origin Resource Sharing).
The Rule: Browsers block frontend code from making requests to a different domain unless that domain explicitly allows it.
curl doesn’t care about CORS. Neither does Node.js. But Chrome does.
How to fix it:
- Server-Side Proxy: Don’t call the API from the browser. Call your own Next.js API route (/api/proxy), which then calls the external API (a sketch follows this list).
- CORS Headers: If you own the API, add Access-Control-Allow-Origin: *.
- Dev Mode: Use a browser extension to disable CORS (development only!).
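Here is a minimal sketch of that proxy pattern, assuming the Next.js App Router (the file path, upstream URL, and env var name are all illustrative):
// app/api/proxy/route.js
export async function GET() {
  // This runs on the server, so CORS does not apply to the outbound call
  const upstream = await fetch("https://api.external.com/data", {
    headers: { Authorization: `Bearer ${process.env.EXTERNAL_API_KEY}` },
  });
  const data = await upstream.json();
  // Served from your own origin, so the browser is happy
  return Response.json(data);
}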
Monitoring your requests
Converting curl to JavaScript is just the start. Once your application is firing thousands of requests to scrape Google or query LLMs, you need visibility.
- Are your requests being blocked?
- Is the API schema changing?
- Are your AI responses accurate?
cloro provides the observability layer for your AI interactions.
While you use JavaScript to build the pipes, Cloro monitors what flows through them. It tracks your brand’s visibility across AI engines, ensuring that when Perplexity or ChatGPT answers a user, they are citing your data correctly.
Code is just the mechanism. Insight is the goal.
Master the fetch API, but don’t forget to watch the results.