Lambdas / Netlify
⭐ Recommended: Use the Quonfig CLI to generate TypeScript definitions for type-safe access to your flags and configs:
```shell
npx @quonfig-com/cli generate --targets node-ts
```
Choosing an Approach
The first step is to choose between a client-side-style or a server-side-style approach. We've written a blog post that goes into detail about choosing how to use Quonfig with Netlify.
Feature Flags in Lambdas: The Browser-Like Approach
A practical solution is to treat Netlify functions like a browser. Quonfig's JavaScript client, for instance, caches flag evaluations per user in a CDN. Here's a sample code snippet for this approach:
- TypeScript (Recommended)
- JavaScript
```typescript
import { quonfig, Context } from "@quonfig-com/javascript";

export default async (req: Request, context: any) => {
  const clientOptions = {
    sdkKey: process.env.QUONFIG_FRONTEND_SDK_KEY!, // client SDK key
    context: new Context({ user: { key: "1234" } }), // user context
  };

  await quonfig.init(clientOptions); // initialize with context

  if (quonfig.isEnabled("my-flag")) { // check feature flag
    // Your code here
  }

  return new Response("ok");
};
```
```javascript
import { quonfig, Context } from "@quonfig-com/javascript";

export default async (req, context) => {
  const clientOptions = {
    sdkKey: process.env.QUONFIG_FRONTEND_SDK_KEY,
    context: new Context({ user: { key: "1234" } }),
  };

  await quonfig.init(clientOptions);

  if (quonfig.isEnabled("my-flag")) {
    // Your code here
  }

  return new Response("ok");
};
```
In our testing from a Netlify function, we see around 50ms of latency for the initial request and then around 10ms for each subsequent request with the same context. That may be too slow for some applications, but it's a good starting point and very easy to set up.
The nice thing about this solution is that you get instant updates when you change a flag: the next request will have up-to-date data.
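Because the CDN caches evaluations per user context, in practice you'll derive the context from the incoming request rather than hardcoding a key as the snippet above does. Here's one way that could look as a plain helper; the `uid` cookie name and the `"anonymous"` fallback are assumptions for illustration, and you'd pass the result to `new Context(...)` before calling `quonfig.init`:

```typescript
// Hypothetical helper: derive stable per-user context attributes from the
// incoming request, so the CDN can cache evaluations per user.
// Uses the standard Fetch API Request (available in Node 18+ and Netlify functions).
function contextAttributesFromRequest(req: Request): { user: { key: string } } {
  const cookies = req.headers.get("cookie") ?? "";
  // Look for a "uid" cookie (assumed name) anywhere in the cookie header
  const match = cookies.match(/(?:^|;\s*)uid=([^;]+)/);
  // Fall back to a shared "anonymous" key when no cookie is present
  return { user: { key: match ? match[1] : "anonymous" } };
}
```

Keeping the key stable per user is what makes the per-context CDN caching effective; a random key per request would miss the cache every time.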
The Server-Side Alternative
Alternatively, you can implement a server-side strategy using the Quonfig NodeJS client. The key is configuring the client to disable background updates and background telemetry, then performing updates on our own timeline.
Here's a sample code snippet for this approach:
- ⭐ TypeScript + Generated Types (Recommended)
- TypeScript
- JavaScript
First, generate your types:
```shell
npx @quonfig-com/cli generate --targets node-ts
```
Then set up your Lambda with full type safety:
```typescript
import { Quonfig, type Contexts } from "@quonfig-com/node";
import { QuonfigTypesafeNode } from "./generated/quonfig-server";

const baseQuonfig = new Quonfig({
  sdkKey: process.env.QUONFIG_BACKEND_SDK_KEY!,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await baseQuonfig.init();

// Create typed instance
const quonfig = new QuonfigTypesafeNode(baseQuonfig);

export default async (req: Request, context: any) => {
  const { userId } = context.params;
  const quonfigContext: Contexts = { user: { key: userId } };

  // Use type-safe methods with context
  if (quonfig.myFlag(quonfigContext)) {
    // Your code here with full type safety
  }

  const userConfig = quonfig.userSpecificConfig(quonfigContext);

  // every 60 seconds, check for updates in-process
  baseQuonfig.updateIfStalerThan(60 * 1000);

  return new Response("ok");
};

export const config = { path: "/users/:userId" };
```
```typescript
import { Quonfig, type Contexts } from "@quonfig-com/node";

const quonfig = new Quonfig({
  sdkKey: process.env.QUONFIG_BACKEND_SDK_KEY!, // server SDK key
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await quonfig.init(); // load configuration

export default async (req: Request, context: any) => {
  const { userId } = context.params; // extract user ID from URL
  const quonfigContext: Contexts = { user: { key: userId } }; // create user context

  return quonfig.inContext(quonfigContext, (rf) => {
    if (rf.isFeatureEnabled("my-flag")) { // context-aware feature flag
      // Your code here
    }

    // every 60 seconds, check for updates in-process
    quonfig.updateIfStalerThan(60 * 1000); // conditional update

    return new Response("ok");
  });
};

export const config = { path: "/users/:userId" }; // URL pattern
```
```javascript
import { Quonfig } from "@quonfig-com/node";

const quonfig = new Quonfig({
  sdkKey: process.env.QUONFIG_BACKEND_SDK_KEY,
  enableSSE: false, // we don't want any background process in our function
  enablePolling: false, // we'll handle updates ourselves
  collectLoggerCounts: false, // turn off background telemetry
  contextUploadMode: "none", // turn off background telemetry
  collectEvaluationSummaries: false, // turn off background telemetry
});

// initialize once on cold start
await quonfig.init();

export default async (req, context) => {
  const { userId } = context.params;
  const quonfigContext = { user: { key: userId } };

  return quonfig.inContext(quonfigContext, (rf) => {
    if (rf.isFeatureEnabled("my-flag")) {
      // Your code here
    }

    // every 60 seconds, check for updates in-process
    quonfig.updateIfStalerThan(60 * 1000);

    return new Response("ok");
  });
};

export const config = { path: "/users/:userId" };
```
With this approach, most requests will be fast, but the periodic update will take a bit longer: about 50ms in our testing from a Netlify function. You're entirely in control of the frequency, so how real-time you want your feature flag updates to be is a judgment call. You could even disable updates altogether if tail latency is of utmost concern and you don't mind redeploying to update your flags.
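To make the trade-off concrete, the cadence behind `updateIfStalerThan` amounts to a staleness guard like the following sketch. This is a simplification for illustration, not the SDK's actual implementation:

```typescript
// Simplified sketch of the "update on our own timeline" idea:
// refresh at most once per maxAgeMs, from within the request handler,
// instead of running a background poller in the function.
let lastUpdateAt = Date.now(); // set when init() completes on cold start

function shouldUpdate(maxAgeMs: number, now: number = Date.now()): boolean {
  if (now - lastUpdateAt < maxAgeMs) return false; // still fresh: skip, stay fast
  lastUpdateAt = now; // claim this refresh so the next requests skip it
  return true; // this one request pays the ~50ms refresh cost
}
```

Every request checks the guard in a few nanoseconds; only the first request after the window expires pays for the refresh. Raising `maxAgeMs` lowers how often that happens, at the cost of staler flags.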