Show HN: Gemini can now natively embed video, so I built sub-second video search
Gemini Embedding 2 can project raw video directly into a 768-dimensional vector space alongside text. No transcription, no frame captioning, no intermediate text. A query like "green car cutting me off" is directly comparable to a 30-second video clip at the vector level. I used this to build a CLI that indexes hours of footage into ChromaDB, then searches it with natural language and auto-trims the matching clip. Demo video on the GitHub README. Indexing costs ~$2.50/hr of footage. Still-frame detection skips idle chunks, so security camera / sentry mode footage is much cheaper.
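The shape of that pipeline can be sketched in a few lines. This is an illustration, not the project's code: the vectors here are made up (the real ones would come from the Gemini embedding endpoint), the brute-force cosine search stands in for ChromaDB's vector query, and `chunk_bounds`, `cosine_top_k`, and the 30s/5s chunking numbers are assumptions for the sketch.

```python
import numpy as np

CHUNK_SECONDS = 30.0   # clip length fed to the embedding model
OVERLAP_SECONDS = 5.0  # so an event on a boundary lands whole in one chunk

def chunk_bounds(duration, chunk=CHUNK_SECONDS, overlap=OVERLAP_SECONDS):
    """Split [0, duration) seconds into overlapping (start, end) windows."""
    stride = chunk - overlap
    bounds, start = [], 0.0
    while start < duration:
        end = min(start + chunk, duration)
        bounds.append((start, end))
        if end >= duration:
            break
        start += stride
    return bounds

def cosine_top_k(query_vec, chunk_vecs, k=3):
    """Brute-force nearest chunks by cosine similarity
    (standing in for the vector-store query)."""
    q = query_vec / np.linalg.norm(query_vec)
    m = chunk_vecs / np.linalg.norm(chunk_vecs, axis=1, keepdims=True)
    sims = m @ q
    order = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in order]

# Toy index: 4 chunks with made-up 768-d embeddings.
rng = np.random.default_rng(0)
index = rng.normal(size=(4, 768))
query = index[2] + 0.1 * rng.normal(size=768)  # a query resembling chunk 2

print(chunk_bounds(60.0))   # [(0.0, 30.0), (25.0, 55.0), (50.0, 60.0)]
print(cosine_top_k(query, index))  # chunk 2 ranks first
```

The overlap means an event straddling a chunk boundary still appears in full inside at least one chunk; the returned `(start, end)` window of the best match is what gets auto-trimmed from the source footage.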
Discussion Highlights (20 comments)
ygouzerh
That's quite interesting, well done! I hadn't thought of this use case for embeddings. It opens the door to many potential applications!
dev_tools_lab
Nice use of native video embedding. How do you handle cases where Gemini's response confidence is low? Do you have a fallback or threshold?
mdrzn
Very interesting (not for a dashcam, but for home monitoring).
klntsky
Why not skip the text conversion? Is it usable at all?
Aeroi
Very cool, anybody have concrete use cases for this?
emsign
Where is the Exit to this dystopia?
7777777phil
Today I learned that Gemini can now natively embed video. Cool project, thanks for sharing!
kamranjon
Does anyone know of any open-weights models that can embed video? Would love to experiment locally with this.
SpaceManNabs
> No transcription, no frame captioning, no intermediate text.

If there is text on the video (like a caption or whatever), will the embedding capture that? Never thought about this before. If the video has audio, does the embedding capture that too?
nullbyte
What a brilliant idea! Is this all done locally? That's incredible.
simonreiff
Very impressive! A webhook could be configured to trigger an alarm if a semantic match to any category of activities is detected, and then you basically have a virtual security guard and private investigator. Well played.
macNchz
This is a really cool implementation—embeddings still often feel like magic to me. That said, this exact use case is also my biggest point of concern with where AI takes us, much more so than most of the common AI risks you hear lots of chatter about.

We live in a world absolutely loaded with cameras now, but we ultimately retain some semblance of semi-anonymity/privacy in public by virtue of the fact that nobody can actually watch or review all of the video from those cameras except when there is a compelling reason to do so. These technologies are making that a much more realistic proposition. The presence of cameras everywhere is considerably more concerning than the status quo, to me at least, when there is an AI watching and indexing every second of every feed—where camera owners or manufacturers or governments could set simple natural language parameters for highly specific people or activities to be notified about.

There are obviously compelling and easy-to-sell cases here that will surely drive adoption as it becomes cost effective: get an alert to a crime in progress, get an alert when a neighbor doesn't clean up after his dog, get an alert when someone has fallen...but the potential implications of living in a panopticon like this, if not well regulated, are pretty ugly.
danbrooks
I work in content/video intelligence. Gemini is great for this type of use case out of the box.
cloogshicer
Could this be used for creating video editing software? Imagine a Premiere plugin where you could say "remove all scenes containing cats" and it'll spit out an EDL (Edit Decision List) that you can still manually adjust.
rigrassm
I picked up a Rexing dash cam a few months back, and after getting frustrated with how clunky it is to get footage off it, I decided to look into building something myself to browse and download the recordings without having to pull the SD card. While scrolling through the recordings, I explicitly remember thinking it would be nice to just describe what I was looking for and run a search. Looking forward to incorporating this into my project. Thanks for sharing!
totisjosema
What is your experience so far with the quality of the retrieved pieces?
bobafett-9902
I wonder if the underlying improvements in visual language learning will allow for even more efficient search. The First Fully General Computer Action Model -> https://si.inc/posts/fdm1/
QubridAI
This is a big leap: true multimodal search without text bottlenecks makes video querying finally feel native and insanely practical.
WatchDog
I don't quite understand the 5-second overlap. I assume it's so that events that occur over the chunk boundary don't get missed, but are there any examples or benchmarks to examine how useful this is?
thegabriele
Why just the dash cam?