As someone who loves experimenting with new technologies, I recently got the Vision Pro (I’m a big fan of VR and the idea of working in mixed reality). What excites me the most is sharing the cool stuff I discover with my friends and colleagues, like how amazing it is to have a floating 3D model in mixed reality, built with just a few lines of Swift code.
Most of my friends are on WhatsApp, and communities I follow are on forums, Reddit, or Discord. But here’s the problem: when I want to share something, it’s usually a video, and many of these videos can be hundreds of MB or even a few GB. Most platforms limit video size to ~60MB, and Reddit doesn’t allow direct video uploads, so I end up having to post a link instead. The process of resizing, cutting, and trimming videos kills the simplicity of sharing. Currently, the easiest way is to upload unlisted videos on YouTube, but that doesn’t feel right either—I want more control.
Transparency Notice: This article is sponsored by Tigris. They’ve provided resources and compensation, enabling me to explore and share this solution. Tigris is featured for its excellent fit in this use case, chosen based on its merits. I’m really glad for their support in creating content about innovative technologies. I encourage readers to explore all services mentioned to find the best fit for their projects.
The Solution
I’ve been thinking of making an app to solve this problem, mainly for myself. I don’t have grand plans of turning it into a SaaS, just something simple to share large videos with a small circle of friends or a community.
That’s how I came up with Circle—a web app where you can upload your video, process it, convert it to 1080p/720p, and share a unique link with friends so they can play it directly in their browser. Circle is more of a demo app to showcase how easy it is today to build an app like this using Phoenix LiveView and FLAME. I wanted to demonstrate that you don’t need a huge infrastructure to build something simple yet effective. Even though it’s a demo app, it works well and I use it daily!
While I have ideas for turning it into a fully fledged open-source app with proper authentication, limits, expiry features, and plug-and-play functionality, it’s not fully there yet. The current demo was put together quickly, in just a day, without tests—so it’s hacky and not quite production-ready. If you’re interested in trying it out or self-hosting it, just be aware of that. But my idea is to build something that people can easily and cheaply (thanks to Tigris and Fly.io, more on this later) self-host.
Cloud providers – Fly.io and Tigris
There are many ways to approach this, the most complex being an autoscaling Kubernetes cluster (or even serverless) with various microservices on AWS. The issue is, I don’t have a team to manage this app, nor unlimited VC funding for AWS. It’s just something I want to run for sharing temporary videos, so the solution needs to be simple, affordable, and something I can maintain on my own.
Two major cost factors come into play here (from my own past experiences with similar projects):
- Storage: While storage itself is usually inexpensive, the real expense for video streaming is the bandwidth—the cost of transferring data.
- Processing power: Video conversion is a resource-intensive process. It’s not quite as demanding as ML training, but it still requires significant power. So, the processing must be on-demand. Since this is a self-hosted app with occasional uploads, I don’t want to spend hundreds of dollars on a machine that’s just sitting idle.
Tigris for Storage
There are plenty of storage services with similar pricing, most of them S3-compatible, meaning you can swap providers easily if needed. But Tigris is the ideal fit for this project, and here’s why:
- Technology: With Tigris, you’re dealing with a single global endpoint, which automatically distributes data to reduce latency based on access patterns. No need to worry about region-specific settings.
- Free Data Transfer: This to me is a game-changer. Tigris offers free data transfer for both uploads and downloads, no matter the region. Compare that to AWS S3, where you’re looking at around $0.09 per GB for outbound data. To put this in perspective, a 5-minute 4K video (~4GB) converted to 1080p (~500MB) and watched by 1,000 users could cost you $45 just in data transfer on S3.
- Seamless with Fly.io: Another bonus—Fly.io doesn’t charge for data transfers between its machines and Tigris. This is key because the machines that process and resize the videos need to download the original from Tigris and then upload the smaller version back, without racking up extra costs.
Running our app and FFmpeg on Fly.io
I like using Fly.io for Phoenix/Elixir applications. It’s super easy to deploy with fly launch and scales globally with a few commands. But the main reason I’m using Fly.io is because it supports FLAME (more on this later).
How it all works underneath
Here’s a quick rundown of the app architecture.
Upload
Uploading directly to the server’s app would give more control, but it presents challenges. With many concurrent uploads, we’d need a machine capable of handling large data transfers and enough temporary storage space for the videos. Then, when processing them on a more powerful machine, we’d face the added complexity of transferring the files between machines.
Instead, we can set up the app so the browser uploads the file directly to Tigris using a presigned URL. This approach is quite efficient since the server’s only role is to generate the presigned URL, allowing the browser to securely upload the video straight to the storage, bypassing the need for the app to handle the transfer itself.
When presign_url/2 is called, it immediately creates a video entry in the database (this helps track uploads, even if a user doesn’t finish the upload) and generates a presigned URL using SimpleS3Upload. This is a simple module made by Chris McCord (no additional dependencies required), which you can copy-paste from this gist.
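A minimal sketch of what that callback could look like. The context module (Circle.Videos), bucket name, size limit, and the bucket_url/0 helper are my illustrative assumptions, not the repo’s exact code; the SimpleS3Upload options mirror Chris McCord’s gist:

```elixir
# In the upload LiveView, wired up via
# allow_upload(:video, accept: ~w(.mp4 .mov), external: &presign_url/2)
defp presign_url(entry, socket) do
  # Create the DB entry right away, so unfinished uploads are tracked too.
  {:ok, video} = Circle.Videos.create_video(%{filename: entry.client_name})

  config = %{
    region: "auto",
    access_key_id: System.fetch_env!("AWS_ACCESS_KEY_ID"),
    secret_access_key: System.fetch_env!("AWS_SECRET_ACCESS_KEY")
  }

  {:ok, fields} =
    SimpleS3Upload.sign_form_upload(config, "circle-videos",
      key: "originals/#{video.id}",
      content_type: entry.client_type,
      max_file_size: 20 * 1_000_000_000,
      expires_in: :timer.hours(1)
    )

  # :uploader must match the name of the JavaScript uploader.
  meta = %{
    uploader: "Tigris",
    key: "originals/#{video.id}",
    url: bucket_url(),
    fields: fields
  }

  {:ok, meta, assign(socket, :video, video)}
end
```

The browser then POSTs the file straight to Tigris using the returned URL and signed form fields; the app never touches the bytes.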
We need a LiveView Uploader on the JavaScript side, Uploaders.Tigris, which we specify when initializing the liveSocket.
Once the upload is completed (or cancelled), a “save-upload” event is sent to the LiveView process, where we handle it in the handle_event("save-upload", _params, socket) callback.
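A minimal sketch of that callback (the params shape and the Circle.Videos function are assumptions for illustration):

```elixir
# Illustrative sketch — mark the upload as completed so processing can start.
def handle_event("save-upload", %{"key" => key}, socket) do
  {:ok, video} = Circle.Videos.mark_uploaded(socket.assigns.video, key)
  {:noreply, assign(socket, :video, video)}
end
```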
FLAME and FFMpeg processing
And here’s where it gets interesting. Instead of creating a separate app or service to run on another machine, with FLAME we can execute a function on a temporary machine. FLAME communicates with Fly.io to quickly spawn a powerful machine (in just 2-3 seconds!) to handle the task. Once the function completes, the machine shuts down, so we’re not paying for idle time. If new videos come in before it shuts down, the machine stays warm and ready to handle more tasks efficiently.
We could simply call FLAME.call/2, but that would block the process where it’s launched—like the LiveView process—until the job finishes. Instead, we can use FLAME.place_child/3 or FLAME.cast/3, which runs the function asynchronously. The job is linked to the calling process, meaning if the user closes the browser tab, the job stops—though that’s not always what we want (more on Oban later).
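For example, kicking off the resize without blocking the LiveView could look like this (Circle.FFMpegRunner is the FLAME pool name I’m assuming throughout these sketches, and resize/2 is the function mentioned in the next paragraph):

```elixir
# Runs asynchronously on a temporary Fly machine. The job is linked to
# the calling process, so it stops if the LiveView process goes away.
FLAME.cast(Circle.FFMpegRunner, fn ->
  Circle.Video.resize(video, "1080p")
end)
```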
Both generate_preview_image/1 and resize/2 use ffmpeg to extract a frame and resize the video. Using the ffmpeg -i url option, we can hopefully stream the video directly from Tigris to the ffmpeg process without needing to download it first.
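Under the hood, the resize boils down to shelling out to ffmpeg. A simplified sketch (the module and the exact flags are my assumptions; scale=-2:1080 resizes to 1080p while keeping the aspect ratio and an even width):

```elixir
defmodule Circle.FFMpeg do
  # Builds the ffmpeg arguments to resize a remote video to 1080p.
  # Passing a presigned URL as -i lets ffmpeg stream from Tigris directly.
  def resize_args(input_url, output_path) do
    ["-y", "-i", input_url, "-vf", "scale=-2:1080", "-c:a", "copy", output_path]
  end

  def resize(input_url, output_path) do
    # stderr_to_stdout: ffmpeg writes its progress to stderr.
    System.cmd("ffmpeg", resize_args(input_url, output_path), stderr_to_stdout: true)
  end
end
```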
For tasks like this, where we need control and the ability to retry jobs (e.g., in case of network errors), we typically prefer using Oban. There’s some overlap between FLAME and Oban (Rethinking Serverless with FLAME – What about my background job processor?) since both allow us to run background jobs asynchronously, but Oban gives us more durability guarantees and better error handling. That said, FLAME is necessary for running jobs on more powerful machines, so ideally we’d use a combination of Oban and FLAME: we enqueue the video resizing job with Oban, and then it triggers FLAME to process the video on a powerful machine.
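A hedged sketch of how the two could compose (the worker module, queue name, and pool name are my assumptions):

```elixir
defmodule Circle.Workers.ResizeVideo do
  # Oban provides durability and retries; FLAME moves the heavy lifting
  # to a temporary, more powerful machine.
  use Oban.Worker, queue: :videos, max_attempts: 3

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"video_id" => video_id}}) do
    # FLAME.call blocks this Oban job until the remote machine is done,
    # so failures surface here and Oban can retry the job.
    FLAME.call(Circle.FFMpegRunner, fn ->
      Circle.Videos.resize(video_id, "1080p")
    end)
  end
end
```

Enqueueing is then just %{video_id: video.id} |> Circle.Workers.ResizeVideo.new() |> Oban.insert().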
Here I’m using a performance-16x machine with 32GB of RAM, but you can change the machine type in config/runtime.exs. Make sure to configure the upper limit of machines Fly.io will spawn when initializing the FLAME pool. In the case below, it’s capped at 10 machines, with a max concurrency of 2 video-processing jobs per machine.
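That configuration might look roughly like this (the option names come from FLAME’s FlyBackend and FLAME.Pool; the values mirror the ones mentioned above, and the pool name is my assumption):

```elixir
# config/runtime.exs — machine size for the FLAME runners
config :flame, FLAME.FlyBackend,
  cpu_kind: "performance",
  cpus: 16,
  memory_mb: 32_768,
  token: System.get_env("FLY_API_TOKEN")

# In the application supervision tree: at most 10 machines,
# each handling at most 2 video-processing jobs concurrently.
{FLAME.Pool,
 name: Circle.FFMpegRunner,
 min: 0,
 max: 10,
 max_concurrency: 2,
 idle_shutdown_after: :timer.minutes(1)}
```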
You’ll notice the new powerful machines appearing on Fly’s dashboard, and you’ll see they’ll only run for a few minutes.
Getting FFMpeg progress with a Collector
When processing, you’ll see the progress in real time. Even though System.cmd runs FFmpeg as a blocking call, I’ve used a custom FFMpeg.ProgressCollector that collects and parses FFmpeg’s output in real time to track the progress.
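The parsing side of such a collector can be boiled down to a standalone sketch like this (the real ProgressCollector in the repo may differ; FFmpeg prints status lines containing a time=HH:MM:SS.ms field):

```elixir
defmodule FFMpeg.Progress do
  # Extracts the "time=HH:MM:SS.ms" field from an FFmpeg status line
  # and returns the elapsed seconds, or nil if the line has none.
  def parse_time(line) do
    case Regex.run(~r/time=(\d+):(\d+):(\d+(?:\.\d+)?)/, line) do
      [_, h, m, s] ->
        String.to_integer(h) * 3600 + String.to_integer(m) * 60 + elem(Float.parse(s), 0)

      nil ->
        nil
    end
  end

  # Progress as a percentage of the video's total duration (in seconds).
  def percent(line, total_seconds) do
    case parse_time(line) do
      nil -> nil
      t -> min(t / total_seconds * 100, 100.0)
    end
  end
end
```

Feeding each line of FFmpeg’s stderr through percent/2 is enough to push live progress updates to the LiveView.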
Once the app finishes processing and uploads the final video to Tigris, it renders a nice video player. I’ve been using this awesome library called video.js that I recently discovered—it’s super easy to configure. The player doesn’t download the entire file at once. Instead, it requests data in ranges via HTTP. This means if you have a long video and want to skip to the end, it should load quickly. The video URL you provide is from the app (/videos/:id/download), which redirects the browser to a presigned Tigris download URL for the resized video. I intentionally avoid streaming videos through the app to keep costs down, taking advantage of Tigris’ free data transfer (Tigris → browser) instead of paying for Fly.io’s data transfer costs (Tigris → app on Fly.io → browser).
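The redirect action might look something like this (the context and the presigned-download helper are assumptions for illustration):

```elixir
# Sketch of GET /videos/:id/download — redirects the browser to a
# short-lived presigned Tigris URL instead of proxying bytes through the app.
def download(conn, %{"id" => id}) do
  video = Circle.Videos.get_video!(id)
  url = Circle.Storage.presigned_download_url(video.resized_key, expires_in: 3600)
  redirect(conn, external: url)
end
```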
If you want more control over the upload and download process, you’d have to route the files through the app, which would lead to additional data transfer costs.
Running the app
https://github.com/poeticoding/circle-demo
You first need a Fly.io account, which also gives you access to Tigrisdata.com.
1. Creating the Fly.io app and the Tigris bucket is quite easy: simply run the fly launch command in the root directory of the app.
2. During the app creation, you need to set the Tigris bucket you want to create, and Fly will do the rest for you (like setting all the secrets and most of the environment variables you need).
3. The Fly deployment will likely fail because you need to set the FLY_API_TOKEN, which is the token the app requires to spawn new machines for FFmpeg processing.
To resolve this, run the following command to generate a new token and set it in the app’s secrets:
fly secrets set FLY_API_TOKEN="$(fly auth token)"
After setting the token, execute fly deploy to rebuild and launch the app.
4. Before uploading videos, you need to set the correct CORS settings for the bucket. In the Fly app dashboard, navigate to the Tigris Object Storage menu and click on your bucket. This will open a TigrisData web page where you can configure the bucket’s settings.
Under Regions, choose “Global”. I’ve also set a TTL to auto-delete the files after a certain time. At the moment, the app doesn’t support file auto-expiry/deletion out of the box.
Remember that, by default, anyone can access any page! If you want to add basic protection to the upload page, you can set the AUTH_USERNAME and AUTH_PASSWORD env variables (set them as fly secrets) to add basic authentication to the /videos/new route. It’s a simple solution: while in production I’d prefer proper accounts and LiveView sessions, basic authentication should suffice for testing the app.
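One way to wire that up is with Plug.BasicAuth in the router (the pipeline and plug names here are my assumptions):

```elixir
# Router sketch: protect the upload route only when both env vars are set.
pipeline :upload_auth do
  plug :maybe_basic_auth
end

defp maybe_basic_auth(conn, _opts) do
  username = System.get_env("AUTH_USERNAME")
  password = System.get_env("AUTH_PASSWORD")

  if username && password do
    Plug.BasicAuth.basic_auth(conn, username: username, password: password)
  else
    # No credentials configured: leave the page open.
    conn
  end
end
```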