Linear Thinking
Building an MVP: How to reconcile Breadth vs Depth
01/23/2024
Planning for Near-Free AI
01/27/2024
Life and Times at Rubric Labs
07/23/2023
Agency 101: How to Convert Leads
11/26/2023
Multi-Staging → Local to Prod in Record Time
02/16/2024
My Summer at Rubric
09/08/2024
Launch: Create Rubric App
11/01/2023
Leveraging AI to create personalized video at scale
12/19/2023
Fine-tuning GPT-4o-mini for Spam Detection
by: Ted
TLDR: We fine-tuned a small LLM on a few dozen spam tags and it's working well. Disclaimer on naming: we called it ros-spam, which is hopefully more scalable than OpenAI's gpt-3-3.5-turbo-4-4o-06-23-2024-11-20 or Anthropic's sonnet-3-3.5-3.5-but-better. Hindsight is 20/20.

What We Built

We created a purpose-built spam detection model specifically for our schema:

```ts
type Contact = {
	company: string
	email: string
	message: string
	// ...
}
```

While it's currently tailored to our needs, the approach could be generalized for broader email spam detection.

The Problem

We get lots of inbound messages. Most are spam. Our workflow for triaging them was simple but inefficient:

1. Someone submits a message on rubriclabs.com/contact
2. We get a Slack notification
3. Someone on our team manually flags it as spam or legitimate

Here's the kicker: even at just 30 seconds per day, this adds up to hours per year (not to mention the lingering cost of context-switching), making it worthwhile to automate in a post-Cursor world.

The Solution: Fine-tuning

The data from our spam flags was simply stored in Postgres, creating what would become our training dataset:

```jsonl
{"message": "We sell the best leather couches", "status": "👎"}
// ...
{"message": "Looking to build an agentic flight booking system", "status": "👍"}
```

Given the hundreds of upvotes/downvotes, deduped and cleaned (a 10-minute process, given the simple schema), fine-tuning on OpenAI was a straightforward process.

The Technical Details

The fine-tuning schema follows a standard chat message format:

```ts
type Message = {
	role: "user" | "assistant" | "system"
	content: string
}
```

The examples are stored as JSONL, a file format where each line is valid JSON. The actual process was refreshingly simple (a rough sketch of the first two steps appears at the end of this post):

1. Write our array of examples to a .jsonl file
2. Upload the file
3. Wait ~10 minutes
4. Pay ~$1
5. Profit?

Does It Work?

Quantitative Evaluation

We did a head-to-head comparison between GPT-4o and ros-spam:

* We held back 10% of our dataset for testing
* We ran comparisons in both the OpenAI playground and OpenPipe evals

The result: ros-spam achieved 100% accuracy vs ~80% for a frontier model, even with prompt engineering.

Qualitative Assessment

We shipped it to prod with a feedback loop:

* Each run appears with the message in Slack as 👍/👎
* We can immediately spot and correct errors
* When needed, we re-tune the model 🔃

Deployment

The implementation was surprisingly painless. Accessing the model requires just a single-line change from standard GPT-4o calls, whether you're using:

* the Node.js fetch API
* the OpenAI SDK
* the AI SDK
* @rubriclab/agents

or any other standard method.

For those interested in alternatives, you could also host this on:

* Fireworks
* Together
* OpenPipe

or self-host on bare metal.

Conclusion

The ROI of this exercise was clear: human-level spam tagging running 24/7 for a couple hours of dev work.

Have questions or feedback? Drop us a message at hello@rubriclabs.com.
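For the curious, here's roughly what steps 1 and 2 above could look like with the OpenAI Node SDK. This is a minimal sketch, not our production code: the file name, system prompt, and base model snapshot are illustrative assumptions.

```ts
import fs from "node:fs"
import OpenAI from "openai"

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

// Illustrative examples in the shape of our spam flags
const examples = [
	{message: "We sell the best leather couches", status: "👎"},
	{message: "Looking to build an agentic flight booking system", status: "👍"}
]

// 1. Write the examples to a .jsonl file, one chat-formatted example per line
const jsonl = examples
	.map(({message, status}) =>
		JSON.stringify({
			messages: [
				{role: "system", content: "Classify the contact-form message as spam (👎) or legit (👍)."},
				{role: "user", content: message},
				{role: "assistant", content: status}
			]
		})
	)
	.join("\n")

fs.writeFileSync("spam-examples.jsonl", jsonl)

// 2. Upload the file and start a fine-tuning job on a small base model
const file = await openai.files.create({
	file: fs.createReadStream("spam-examples.jsonl"),
	purpose: "fine-tune"
})

const job = await openai.fineTuning.jobs.create({
	training_file: file.id,
	model: "gpt-4o-mini-2024-07-18"
})

console.log(`Started fine-tuning job ${job.id}`)

// Once the job succeeds, using the model really is a one-line change, e.g.:
// await openai.chat.completions.create({model: "ft:gpt-4o-mini-2024-07-18:your-org::abc123", messages})
```

The JSONL lines mirror the Message format above, so the resulting model drops straight into any chat-completions call.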
Building an MVP: How to reconcile Breadth vs Depth
by: Sarim
Note: This is inspired by an internal memo I wrote to the team to build a prioritization framework for the launch of our product, Maige. Hope you find it relatable. As always, this is a work in progress, so feel free to email us with feedback.

What is breadth?

Breadth is a measure of how wide an effort is: which areas it covers. For example, the number of pages in an app (e.g. /home, /settings) or the number of features on a page (e.g. create team, add projects, invite collaborators, handle permissions) can all be used to measure breadth. In other words, breadth is quantity. It is a horizontal vector; think of it like the crown of a tree.

What is depth?

Depth is the set of layers a feature or page interacts with. For example, a sign-in feature has a User Interface (UI), database, session, cookies, etc. All these layers form the depth. In other words, depth is quality. It is a vertical vector; think of it as the stem or roots of a tree.

Building a Minimum Viable Product (MVP)

Context

In the past, I thought building an MVP was about finding a balance between breadth and depth to meet an accelerated timeline. However, I now think this is the wrong approach. It leads to mediocre software with tonnes of tech debt, which leads to frustration, which leads to slow software cycles, and eventually to low motivation to continue building. Our project, Neat, is a great example of this. It became increasingly hard to add features to the app because it had decent breadth but poor depth. In a world where software is becoming increasingly easy to make, it's more important than ever to deliver killer User Experience (UX), and good UX requires solid depth.

A better approach

I think the better approach is compromising on breadth to meet an accelerated timeline, but never compromising on depth. Let me explain. If we want to ship new software by some date, we should cut down the number of pages and features to the extent that we can deliver impeccable quality on each feature and page. We should never compromise on tech decisions or choose the easier/faster option. We should always default to the "right" way even if it takes longer. If we want to move quickly, we should compromise on quantity by cutting down the number of pages or features. However, I understand the issue here: how deep can we really go? How deep is too deep?

How deep should you go?

Determining how deep is too deep is largely a matter of instinct. Ask yourself: what's the best possible way to build feature X? What's the app that does this feature best? How can I build it to this quality? And then you execute to that standard. This also needs to be guided by solid engineering principles that the entire team aligns on. For example, this Supabase doc has some really good principles that have allowed hundreds of devs to contribute to the software without compromising on quality. Some of my personal favourite principles are:

* be as granular as possible
* always be portable
* always be extensible

The discussion around the best engineering principles is a conversation for another time, perhaps even a blog post. But even settling on a consistent way to organize UI components and pages, a specific ORM (e.g. Prisma), our own ESLint extension, and a relational database schema pattern that works really well is allowing us to build instincts and standards around depth.

Recap

To recap, MVPs are optimized for speed; however, speed should never compromise depth, only breadth. It's clear to me that if we had one brilliantly executed page or feature, it would be easier for us to test, iterate, and improve. We would have a really good foundation. A secondary benefit of this approach is that it forces us to critically evaluate which feature or page is most important to build first. This allows us to think through our ideas a lot more clearly.
Planning for Near-Free AI
by: Ted
Capability

Open-source LLMs are thriving. Since the summer of 2023, OS LLMs have rapidly approached human-level performance across a suite of benchmarks.

[Figure: line graph showing the average benchmark performance of open-source LLMs approaching GPT-4 and human-level performance, August 2023 to January 2024. From the HuggingFace Open LLM Leaderboard and OpenAI.]

For now, GPT-4 remains the best. What's more, GPT-5 will come out around September 2024, according to prediction markets. We can expect the envelope to be pushed further by any jump in capabilities.

[Figure: line graph of the Metaculus prediction market for when GPT-5 will be released, converging around September 2024. From Metaculus: When will OpenAI announce GPT-5?]

Regardless, open-source (or at least open-access) LLMs continue to catch up.

Cost

Self-hosting LLMs can help to protect customer data, prevent fluctuations in ability, and build more defensible tech. Surely this option is complicated and costly? Quite the opposite. The price of GPT-3-level models is in freefall.

[Figure: line chart showing the 99.55% drop in price of GPT-3.5-turbo-level LLMs like Mixtral from 2020 to 2024. From Latent Space: The Four Wars of the AI Stack.]

What would you build if GPT-4 were 10x cheaper? What about 100x cheaper? 100x faster? This is not an outlandish vision but a near-inevitable trend. As we hurtle toward a future where human-level intelligence is virtually free and abundant, how can we prepare? What if every student had a genius-level tutor, every SMB had Fortune 500-level marketing resources, and every online community had Stripe-level engineering? How do we ensure that these trends serve to close digital gaps rather than widen them?

Opportunity

Our future is not written in stone; it's coded in bytes and shaped by every line of code we write, every application we dream up, and every conversation we have about what comes next. Whether you're looking to streamline existing processes, unlock new levels of personalization, or pioneer entirely new services, we're happy to chat about what's possible.
Life and Times at Rubric Labs
by: Sarim
Hello out there! You are now entering the lively world of Rubric Labs, a place where programming knowledge complements the subtle charm of shared conversations over cups of coffee. Join us in this journey, shall we?

Here, we have a nifty little creation called Blog It, our handy Slack bot, that makes blogging as easy as sipping on a latte. Ever pondered the inner workings of Blog It? Well, we won't bore you with the technical mumbo-jumbo, but in simple terms, it simplifies and automates the cumbersome task of blogging for us. And boy, are we proud of it. Always on the path of improvement, Blog It is our personal testament to this unwavering truth.

What are we like at Rubric Labs, you ask? To put it mildly, we're a lively mix of tech enthusiasts exploring the dynamic oceans of challenges and thrilling victories. Be it finalizing the ideal banner image or trying to grasp the latest API, every day ushers in a new quest.

That, my dear readers, is a glimpse into the world of Rubric Labs. Our goal? To make blogging feel less like a routine task and more like a joy ride. Until next time!
Agency 101: How to Convert Leads
by: Sarim
At Rubric, we want to minimize the time we spend generating leads and converting clients, so we've built our sales pipeline much the way we would engineer a software product.

It all starts with a simple contact form. Nothing extraordinary, but there are lots of subtle interactions that add to the user experience.

[Screenshot: Rubric's contact form, with fields for first name, company, email, and message, plus a submit button.]

When a potential lead visits our website, they can click R anywhere to get started. This accelerates their time to value. Otherwise, they can always click "Get in touch" in the header as a fallback option. The contact form is intentionally simple, yet sufficient to set context for a first conversation. We ask for first name, company, email, and a message.

Alternatively, a visitor can click on our email to copy it to their clipboard. We find "mailto:elon@x.com" links annoying, so this is a good compromise. We use Sonner by @emilkowalski_ for the toasts. It fits our brand aesthetic perfectly out of the box.

[Screenshot: the contact form after clicking the email; a toast says "email copied".]

Once you fill out the form and hit submit, you get a confirmation of your submission. We use @nextjs Server Actions to make this seamless (a rough sketch of such an action appears at the end of this post).

[Screenshot: the contact form after submission; a toast reads "request submitted".]

Now, on to the fun bits. The server action saves this data straight to our @NotionHQ "Pipeline" database with all the relevant metadata.

[Screenshot: Notion showing the entry created from the contact form: Elon, November 27, by Sarim Malik, status "lead", etc.]

After it saves to Notion, it sends a message to our #sales channel in @SlackHQ, including a link to the specific Notion ticket. At this point, one of our team members reacts to the message with an emoji, letting others know they are the account owner of this lead.

[Screenshot: Slack showing the message triggered by the new contact form submission: new lead, Elon, etc.]

They will reply to the request within 24 hours and cc our shared @Google inbox hello@rubriclabs.com, so everyone has visibility into the conversation.

I am very pleased with how this turned out; there's still lots to optimize and improve. Some ideas here are inspired by how @raycastapp collects user feedback, blog linked here. As always, the code is open-source if you want to fork or contribute. Good luck.
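To make the flow above concrete, here's a minimal sketch of what a server action like ours could look like, assuming the official @notionhq/client SDK and a Slack incoming webhook. The database ID, property names, and env var names are hypothetical; our open-source code differs in the details.

```ts
"use server"

import {Client} from "@notionhq/client"

const notion = new Client({auth: process.env.NOTION_TOKEN})

export async function submitContactForm(formData: FormData) {
	const firstName = String(formData.get("firstName"))
	const company = String(formData.get("company"))
	const email = String(formData.get("email"))
	const message = String(formData.get("message"))

	// 1. Save the lead to the "Pipeline" database (property names are illustrative)
	const page = await notion.pages.create({
		parent: {database_id: process.env.NOTION_PIPELINE_DB_ID!},
		properties: {
			Name: {title: [{text: {content: firstName}}]},
			Company: {rich_text: [{text: {content: company}}]},
			Email: {email},
			Message: {rich_text: [{text: {content: message}}]},
			Status: {select: {name: "Lead"}}
		}
	})

	// 2. Notify the #sales channel with a link to the Notion ticket
	const pageUrl = "url" in page ? page.url : ""
	await fetch(process.env.SLACK_SALES_WEBHOOK_URL!, {
		method: "POST",
		headers: {"Content-Type": "application/json"},
		body: JSON.stringify({text: `New lead: ${firstName} (${company}) ${pageUrl}`})
	})

	return {success: true}
}
```

Because it's a server action, the form can call this directly with no API route, and secrets like the Notion token never reach the client.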
Multi-Staging → Local to Prod in Record Time
by: Dexter
At Rubric, we are constantly pushing full-stack features in parallel. We strive to get high-quality code to prod as fast as possible with minimal coordination. We developed the multi-staging workflow to address problems with webhooks in staging that created bottlenecks across the CI/CD pipeline.

TLDR: Instead of Development, Staging, and Prod envs, we propose Local-[DEVELOPER], Staging-[DEVELOPER], and Prod. By setting up unique pipelines for each developer, you can move faster, testing multiple full-stack features simultaneously in the cloud.

[Diagram: multiple local/staging branches merging into Prod.]

The Problem

* Webhooks require a static URL (breaks Vercel preview branches)
* A single staging branch necessitates coordination across teams
* Shared databases don't let you push schema changes fast enough (conflicts)

The common practice is to have a development (local), staging, and production env, with slight modifications to each developer's local env that they handle manually in .env.local in their IDE. This creates conflicts ("Prisma Studio isn't working because you pushed schema") and headaches ("It works on my machine but times out in serverless (Vercel)"). Multi-staging solves these problems by giving every developer their own local and staging env.

Tunnelling (Local)

We used ngrok to handle local environments, specifically the cloud edge domains feature.

```sh
ngrok http --domain=dexter.rubric.sh 3000
```

Using this command, I can tunnel dexter.rubric.sh → localhost:3000, allowing webhooks sent to dexter.rubric.sh to be forwarded to my local machine. This is the bones of the Local-[DEVELOPER] env.

Bonus points (~/.zshrc):

```sh
alias l='f(){ local port="$*"; ngrok http --domain=dexter.rubric.sh ${port:-3000}; }; f'
```

We further give each developer a local DB and their own set of auth tokens, but we will dive into this in The Stack section.

Staging

Let's go further and set up a multi-lane on-ramp to prod. In addition to local, let's also give each developer their own staging env, to let them build, test, and share features without coordinating. Each developer should have their own local and staging:

* Database
* Auth tokens
* URL
* GitHub branch
* Env

We use this workflow for Maige, a Rubric experiment that brings intelligent workflows to your issues and PRs in GitHub. If you are interested in the project, check out the codebase (OS) here. There are several useful DX commands that allow you to never have to open the .env file. See package.json.

The Stack

Infisical

Infisical is the single source of truth for environment variables for local and staging envs. On local:

```sh
infisical --env=dexter-local export > .env.local
```

[Screenshot: Infisical environments.]

PlanetScale

PlanetScale is our database. Using the safe migrations workflow, you can manage schema like GitHub, with pull requests. A separate branch for local and staging (per developer) lets us push changes aggressively without coordination.

[Screenshot: PlanetScale DB branches and schema changes.]

Vercel

Vercel is our deployment partner, automatically deploying on changes to main and [DEVELOPER]/staging. Staging branches have their own URL and pull env from Infisical automatically.

[Screenshot: Vercel staging URLs and Infisical env sync.]

GitHub (Apps)

For Maige, we have a separate app for each env, which allows webhooks to be handled properly. Each developer's instance of the app works in the cloud without an active IDE.

[Screenshot: Maige GitHub Apps.]

The Magic

Once this is all set up, we get to see the true power of the workflow: developers demoing full-stack features in the cloud simultaneously.
With local envs configured, I can easily work on Ted's local branch if we pair program:

```sh
git checkout ted/feat && infisical --env=ted-local export > .env.local && ngrok http --domain=ted.rubric.sh
```

With staging envs configured, I can test out Arihan's new PR (which includes schema changes, new UI, and modified GitHub permissions) from my phone on the subway. No need for an IDE or even a laptop (:

[Screenshot: Arihan demoing a new feature in staging.]

As we hurtle forward into the future, modern engineering looks a lot less like vim and a lot more like chat. With multi-staging, we can sustain an aggressive pace by testing full-stack changes in the cloud, on a real URL. When AI agents make the leap from copilots to engineers, we might not have the time to review code. What if we could simply open a URL and test as a user? Are we ready for this velocity?

As always, this is a work in progress. Please send feedback or questions to hello@rubriclabs.com. Peace nerds (:
My Summer at Rubric
by: Arihan
This past summer, I had the privilege of running various projects at the Lab at Rubric. The focal point of these experiments was generative UI: bridging capable LLMs with rich component libraries and design systems. Inspired by Vercel AI and v0, we set out to build our own fully functional genUI experience, supporting tedious dev work with intuitive DX and enhancing user interactions with dynamic but simple UX. Alright, enough high-level talk; let me actually show you some of the stuff we built, but please note that they are still works in progress :)

Rubric UI

With the sharp growth of Rubric, we've realized the need for a centralized design system and component library: one that captures the sick aesthetics and feel of the Rubric brand while making initialization super simple for internal or client projects. To this end, we've built an initial version of Rubric UI to achieve just that. With just...

```sh
bun i rubricui
bunx rubricui init
```

...you now have access to a fully featured component library + styles, ready to use in your project! Ranging from basic components like buttons to slightly more complex ones like code blocks, markdown displays, and file uploads, Rubric UI aims to remove the headache of re-implementing components or worrying about styling and dependencies. Play around with some of the components here.

Genson

Moving on to a more experimental and ambitious AI-centric project, we built Genson, a system that builds complex UIs with LLMs and structured outputs. With a ton of custom-written Zod schemas, we were able to build a system that generates typesafe JSON, which is then used to create entire React components that can not only render data but also call functions and make API calls all on their own. By defining a schema for all components, we can make it very intuitive to define actions (functions, server actions, APIs) and front-end components (a toy sketch of such a schema appears at the end of this post). Users can generate full dashboards with forms, buttons, dropdowns, and more, with nested components, shared data, complete customizability, and APIs, with minimal errors thanks to the typesafe JSON output. Like v0, users can refine generations with additional prompts through the chat interface on the left, while viewing the generated output on the right and previous generations via the bottom sidebar. Additionally, updates to the UI are quicker due to a selective JSON update system, where only the necessary components or properties are rewritten. We're still working on defining what a "good" UI schema looks like, but we're really excited about the future of Genson and what it could mean for the future of JSON-powered UI generation.

Autotune

Moving to the other end of the UI → AI spectrum, I am excited to unofficially announce our fine-tuning package, Autotune! Given an OpenAI-compatible OpenAPI JSON schema, Autotune performs synthetic data generation of user prompts and AI responses to fine-tune open-source models to respond to prompts in the specified structured schema. Now, you can easily bring the power of structured outputs to your projects while using the open-source models you love. Best of all, Autotune is designed to be as simple as possible to use. Running autotune init generates a .env file based on the user-chosen fine-tuning provider, along with a schema.json file containing an example OpenAPI schema. Once the relevant values are filled in and your schema is ready, simply run autotune build and you're ready to get started!
The build step spawns a series of prompts for the user to answer, specifying the data generation and fine-tuning process. Users can create a new dataset or add entries to an existing one. Once all prompts are answered, new entries are generated and added to a new or existing dataset. Then, the user can specify whether they would like to fine-tune a model via the CLI; if the answer is yes, the user can specify which model to fine-tune, and the fine-tuning status is shown in real time. Finally, the new model slug is output to the user and can be used to make API calls.

Autotune is currently in the early stages of development, with a limited subset of providers and quite some work to be done, but we strongly believe in making fine-tuning as easy and accessible as possible for all developers, no matter their level of expertise with AI. We hope to continue working on Autotune and release it soon, so stay tuned!

Some concluding statements 🫡

Working on these projects at Rubric has been an incredible experience. I've shipped my first NPM package, worked on some really cool tech with a bunch of smart and talented people, and been challenged in so many different ways. Best of all, I can already see tremendous growth in my skills, and I'm really excited about what's in store for me next. As always, keep learning and keep shipping 🚢 Peace ✌️ - Arihan
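P.S. For the curious, here's a toy sketch of what a Genson-style component schema could look like: a recursive Zod schema describing components and the actions they're allowed to trigger. Every name here is an illustrative assumption; the real schema is far more involved.

```ts
import {z} from "zod"

// Actions a generated component is allowed to trigger (names are illustrative)
const action = z.object({
	name: z.enum(["fetchUsers", "createUser", "deleteUser"]),
	args: z.record(z.string()).optional()
})

// A recursive component type: buttons, inputs, and containers with children
type Component = {
	type: "button" | "input" | "container"
	label?: string
	onClick?: z.infer<typeof action>
	children?: Component[]
}

const component: z.ZodType<Component> = z.lazy(() =>
	z.object({
		type: z.enum(["button", "input", "container"]),
		label: z.string().optional(),
		onClick: action.optional(),
		children: z.array(component).optional()
	})
)

// The LLM is asked for JSON matching this schema, which we then render as React components
export const dashboardSchema = z.object({
	title: z.string(),
	components: z.array(component)
})
```

Because the LLM's output is validated against the schema before rendering, a malformed generation fails loudly instead of producing a broken dashboard.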
Launch: Create Rubric App
by: Ted
> npx create-rubric-app

We recently wrote about designing AI agents for production. Many developers shared the feedback that it's still not clear how to actually serve the thing to users. We wanted to bring that power to all.

Today, we're excited to introduce create-rubric-app: the fastest way to bootstrap a full-stack AI project. Building on the legacy of create-react-app and create-next-app, this is our opinionated meta-meta-framework for building AI apps ready for users. We'll be adding templates for a ton of common use cases, but our first one is focused on AI agents.

Why agents?

Agents are a powerful new tool that unlocks a whole suite of applications, like variable-depth scraping, multi-step planning, and self-healing code. However, there is a lack of guidance on how to implement them in a deployable way.

Our agent is built with LangChain and OpenAI Functions. As an example, we've built a smart to-do list. The agent has access to four tools:

* listTasks
* createTask
* updateTask
* deleteTask

From these tools, it can manipulate the to-do list flexibly from natural language. To try it out, visit the app in the browser and try asking the agent to "remind me to buy bread", then "mark the bread task as complete". It should explain what it's doing as it completes these tasks. To debug, watch the logs in your terminal.

How does it work?

The tools above are each described by a Zod schema (a rough sketch of one such tool appears at the end of this post). The app passes the descriptions of each tool to GPT via OpenAI's API. The LLM then decides which tool to "call", and with which inputs. By using OpenAI's Function Calling, the LLM is well-suited to outputting structured data (a common limitation of LLMs). LangChain then parses the output and calls the actual functions, passing the output back to the LLM to inform the next step.

You now understand how AI agents work and you're ready to build one for your own use case. Doing so is as simple as swapping the agent's toolkit with your functions, API endpoints, a search engine, a memory store, or even a Discord connection (allowing agent ↔ human communication). As long as your tools return text explaining their result, the agent will do its best to work with them.

Quickstart

To use create-rubric-app, simply run npx create-rubric-app@latest. This will guide you through some steps to bootstrap a Next.js project and generate an OpenAI key. To run the app, run bun dev or npm run dev. To deploy your app and make the agent accessible to users, two options are Vercel and Railway.

This project is open-source! We welcome suggestions and contributions. Most of all, we look forward to seeing what you build with this.
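As a rough illustration of how a tool is described to the model, here's what a createTask-style tool could look like using LangChain's DynamicStructuredTool with a Zod schema. Treat it as a sketch under assumptions (the import path and persistence helper are illustrative), not the template's actual source.

```ts
import {DynamicStructuredTool} from "langchain/tools" // in newer versions: "@langchain/core/tools"
import {z} from "zod"

// Hypothetical persistence helper; the template uses its own database layer
async function saveTask(title: string, dueDate?: string) {
	return {id: "task_123", title, dueDate, completed: false}
}

export const createTask = new DynamicStructuredTool({
	name: "createTask",
	description: "Create a new task on the to-do list.",
	schema: z.object({
		title: z.string().describe("A short description of the task"),
		dueDate: z.string().optional().describe("An ISO 8601 due date, if the user gave one")
	}),
	func: async ({title, dueDate}) => {
		const task = await saveTask(title, dueDate)
		// Tools should return text so the agent can reason about the result
		return `Created task "${task.title}" with id ${task.id}`
	}
})
```

Swapping the agent's toolkit for your own use case mostly means writing more objects like this one, each with a clear description and a text result the LLM can reason about.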
Leveraging AI to create personalized video at scale
by: Sarim
Context

As 2023 comes to a close, Graphite wanted to celebrate GitHub users for their contributions throughout the year. The goal was to end the year with a gift for developers to reminisce, reflect, and feel inspired for the new year. As the creators of GitHub Wrapped, a project we built in 2021 and scaled to 10k users, our team at Rubric was perfectly positioned to take this on.

However, 2023 was unlike any other year. 2023 was the year LLMs became generally available. Compared to 2021, it felt like the realm of opportunities had opened wide for us, and we wanted to push past static images and templated storylines, as we had done before. Instead, we wanted to create something truly personalized, completely unique to the end user. We also wanted this to be immersive.

A Year in Code was born: personalized, AI-generated video. It's no surprise that we ended up leveraging LangChain to build this. LangChain's out-of-the-box helper functions helped us get to production in days, rather than weeks.

Tech Stack

* GitHub GraphQL API to fetch GitHub stats
* LangChain & OpenAI GPT-4-turbo to generate the video_manifest (the script)
* Remotion to create and play the video
* AWS Lambda to render video
* AWS S3 to store video
* Three.js for 3D objects
* Supabase for database and authentication
* Next.js 13 for frontend
* Vercel for hosting
* Tailwind for styling
* Zod for schema validation

Architecture Overview

Let's summarize the architecture in a simple diagram.

[Flowchart: the architecture of Year in Code. GitHub API → gpt-4-turbo with LangChain and Zod → Remotion → AWS Lambda → video.]

We begin by authenticating a GitHub user using Supabase auth. Once authenticated, we fetch user-specific data from the GitHub GraphQL API and store it in our PostgreSQL database hosted on Supabase. Supabase offers an out-of-the-box API with Row Level Security (RLS), which streamlines reads/writes to the database.

At this point, we pass user stats to the LLM (gpt-4-turbo) using LangChain. Leveraging prompt engineering, function-calling, and Zod schema validation, we are able to generate structured output called the video_manifest. Think of this as the script of the video. This manifest is passed to a Remotion player, which allows easy embeds of Remotion videos in React apps at runtime. The manifest maps over a set of React components. At this point, the user is able to play the video in the client and also share their URL with their friends. Next.js 13 server rendering patterns make this seamless for the end user. Additionally, the user is able to download an .mp4 file for easy sharing by rendering the video in the cloud using AWS Lambda and storing it in an S3 storage bucket.

Let's explore this in greater detail.

Fetching stats

When you log into the app with GitHub, we fetch some of your stats right away. These include:

* your most-written languages,
* repositories you've contributed to,
* stars you've given and received, and
* your newest friends.

We also fetch your total commits, pull requests, and opened issues. Check the type below to get a sense of the data we fetch. We wanted to be cognizant of scope here, so we ask for only the most necessary permissions, excluding any access to code. The project is also fully open-source to reinforce trust with the end user.

[Code snippet: the Stats interface, with username, year, etc.]

Generating the manifest

We then pass these stats to OpenAI's gpt-4-turbo, along with a prompt on how to format its response.
Here's the prompt:

[Code snippet: the prompt template, "You are GitHub Video Maker...", etc.]

Given user stats, the AI generates a video_manifest, which is similar to a script for the video. The manifest tells a unique story in 12 sequences (as defined in the prompt). Assuming each sequence lasts 5 seconds, this consistently results in a 60-second video.

Here we ran into a challenging problem: do we give the AI complete creative freedom, or do we template as guardrails for the AI? After running some experiments, we quickly realized that in the given timeframe, we couldn't generate high-quality video by giving the AI complete creative freedom. While the output was decent and could have been improved, it wasn't good enough to create that nostalgic moment, especially in the engineering time that we had. So instead, we struck a middle ground by creating a bank of "scenes" and parametrizing them as much as possible. This allowed the AI to pick from a bank of scenes, pass user-specific data, and generate unique text for each scene, resulting in a unique sequence of personalized frames. This was possible using OpenAI's Function Calling, which enabled the AI to output parsable text conforming to a Zod schema.

The schema uses a Zod discriminated union (not the name of a rock band) to distinguish scenes (a simplified re-creation appears at the end of this post):

[Code snippet: the video schema in Zod, an array of objects with text and animation, etc.]

See the full schema here. Let's look at a sample output video manifest.

[Code snippet: a sample video manifest with text and animation fields.]

Each entry (scene) in the manifest is an object that has a text field and an animation field. The text is unique for each scene, and so is the order of the scenes, whereas the animation for each scene is picked from a bank of pre-built components.

Playing the video

Now the fun part: playing the actual video. This part was challenging, because we're quite literally letting an AI direct a video that we then trim together. From that director's cut, we map scenes to React components, which Remotion takes to generate a video. Take a look:

[Code snippet: the video scene mapping in React.]

Here, the from prop determines the first frame at which this scene will appear. To generate 3D objects, we leveraged Three.js. For example, to mould this wormhole effect from a flat galaxy image, we pushed Three's TubeGeometry to its limits with a high polygon count and low radius.

[Image: a flat purple nebula image with an arrow pointing to the version mapped onto a 3D tunnel shape.]

Now, we want this experience to scale by being as lightweight as possible. By saving the video_manifest instead of the actual video, we trim the bulk of the project's bandwidth and storage by 100x. Another benefit of this approach is that the video is actually interactive.

Rendering the video

Since we map over a manifest in the client using React components, to download the video as an .mp4, we have to render the video first. This is achieved using Remotion Lambda, leveraging 10,000 concurrent AWS Lambda instances and storing the file in an S3 bucket. Each user only has to render their video once, after which we store their download URL in Supabase for subsequent downloads. This step is the most expensive in the entire process, and we intentionally added some friction to it so that only the users who care the most about sharing their video end up executing it.

Conclusion

This project makes use of all the latest tech: server-side rendering, an open-source database, LLMs, 3D, generative video.
These sound like buzzwords, but each is used very intentionally in this project. We hope it inspires you to build something new in 2024!

Ready for takeoff? Give Year in Code a try. Translate your keystrokes into stardust. Find solace in your retrospection, let others join you in your journey, and connect with starfarers alike. Your chronicle awaits.

If you have feedback on this post, please reach out to us at hello@rubriclabs.com.
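To give a sense of what that discriminated union looks like in practice, here's a simplified re-creation of a scene schema. The scene names and fields are illustrative; see the full schema linked above for the real thing.

```ts
import {z} from "zod"

// Each scene type shares a "text" narration field and is discriminated by "animation"
const scene = z.discriminatedUnion("animation", [
	z.object({
		animation: z.literal("languages"),
		text: z.string().describe("A short, personal line about the user's top languages"),
		languages: z.array(z.string()).max(3)
	}),
	z.object({
		animation: z.literal("stars"),
		text: z.string(),
		starsGiven: z.number(),
		starsReceived: z.number()
	}),
	z.object({
		animation: z.literal("wormhole"),
		text: z.string().describe("A closing, reflective line for the final scene")
	})
])

// The full manifest is an ordered list of scenes; each maps to a React component in Remotion
export const videoManifestSchema = z.object({
	scenes: z.array(scene).length(12)
})

export type VideoManifest = z.infer<typeof videoManifestSchema>
```

Passing a schema like this through function calling is what lets gpt-4-turbo act as the "director" while guaranteeing the manifest stays renderable.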