majestic5762 commented on Show HN: Open-Source Video Editor Web App    · Posted by u/zenkyu
majestic5762 · 2 years ago
Thoughts:

1) add ffmpeg wasm

2) use ffmpeg to detect scene changes

3) you can generate nice & short social media videos based on scene changes
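
A rough sketch of points 1 and 2, assuming the @ffmpeg/ffmpeg 0.12-style API (FFmpeg class, exec, log events); the 0.4 threshold and file names are placeholders:

    import { FFmpeg } from '@ffmpeg/ffmpeg';
    import { fetchFile } from '@ffmpeg/util';

    // Detect scene changes in the browser: run ffmpeg's select/showinfo
    // filters and parse the pts_time of every frame that passes the threshold.
    async function detectSceneChanges(videoUrl: string, threshold = 0.4): Promise<number[]> {
      const ffmpeg = new FFmpeg();
      const timestamps: number[] = [];

      // showinfo prints one log line per selected frame; grab its pts_time.
      ffmpeg.on('log', ({ message }) => {
        const m = message.match(/pts_time:([\d.]+)/);
        if (m) timestamps.push(parseFloat(m[1]));
      });

      await ffmpeg.load();
      await ffmpeg.writeFile('input.mp4', await fetchFile(videoUrl));

      // Keep only frames whose scene score exceeds the threshold; discard output.
      await ffmpeg.exec([
        '-i', 'input.mp4',
        '-vf', `select='gt(scene,${threshold})',showinfo`,
        '-f', 'null', '-',
      ]);

      return timestamps; // candidate cut points, in seconds
    }

For point 3, consecutive timestamps then become -ss/-to pairs for cutting the short clips.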

majestic5762 commented on Protecting your email address via SVG instead of JavaScript   rouninmedia.github.io/pro... · Posted by u/FrostKiwi
cwillu · 2 years ago
Email is still plain-text within an xml document referenced in the page source.
majestic5762 · 2 years ago
yeah, useless stuff portrayed as smart
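
To make cwillu's point concrete: a hypothetical scraper just fetches the referenced SVG and runs one regex over it, so the indirection adds no real protection (the URL in the usage note is made up):

    // Hypothetical scraper: the SVG referenced by the page is plain XML,
    // so the address inside it is one fetch and one regex away.
    async function harvestEmail(svgUrl: string): Promise<string | null> {
      const svg = await (await fetch(svgUrl)).text();
      const m = svg.match(/[\w.+-]+@[\w-]+\.[\w.-]+/);
      return m ? m[0] : null;
    }

    // e.g. harvestEmail('https://example.com/protected-email.svg')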

majestic5762 commented on Memory and new controls for ChatGPT   openai.com/blog/memory-an... · Posted by u/Josely
majestic5762 · 2 years ago
mykin.ai has the best memory feature I've tested so far. ChatGPT's memory feature feels like a joke compared to Kin's.
majestic5762 commented on Nvidia's Chat with RTX is an AI chatbot that runs locally on your PC   theverge.com/2024/2/13/24... · Posted by u/nickthegreek
mistermann · 2 years ago
I'd like something that monitors my history across all browsers (mobile and desktop, and dedicated client apps like substance, Reddit, etc.), ingests the articles (and comments, and maybe other links to some depth), and then lets me ask questions about it... that would be amazing.
majestic5762 · 2 years ago
rewind.ai
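
What mistermann describes is essentially retrieval over a browsing-history dump. A toy sketch of that loop, where the word-hashing "embedding" is only a stand-in for a real embedding model and the retrieved pages would be pasted into whatever LLM you use as context:

    // Toy retrieval over saved pages: hash words into a fixed-size vector as a
    // stand-in for real embeddings, then rank pages by cosine similarity.
    type Page = { url: string; text: string };

    function embed(text: string, dims = 256): number[] {
      const v = new Array(dims).fill(0);
      for (const word of text.toLowerCase().split(/\W+/)) {
        if (!word) continue;
        let h = 0;
        for (const c of word) h = (h * 31 + c.charCodeAt(0)) % dims;
        v[h] += 1;
      }
      return v;
    }

    function cosine(a: number[], b: number[]): number {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
      return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
    }

    // Rank ingested pages against a question; the top hits become LLM context.
    function retrieve(pages: Page[], question: string, k = 3): Page[] {
      const q = embed(question);
      return pages
        .map(p => ({ p, score: cosine(embed(p.text), q) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, k)
        .map(x => x.p);
    }
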
majestic5762 commented on Nvidia's Chat with RTX is an AI chatbot that runs locally on your PC   theverge.com/2024/2/13/24... · Posted by u/nickthegreek
tuananh · 2 years ago
This is exactly what I want: a personal assistant.

A personal assistant that monitors everything I do on my machine, ingests it, and answers questions when I need them.

It's not there yet (you still have to enter URLs manually, etc.), but it's very much feasible.

majestic5762 · 2 years ago
mykin.ai is building this with privacy in mind: it runs small models on-device and larger ones in confidential VMs in the cloud.
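
A hypothetical sketch of what that kind of split can look like; the routing policy, thresholds, and stubs below are invented for illustration, not Kin's actual design:

    // Hypothetical router: keep obviously sensitive or short prompts on-device,
    // send the rest to a larger model running in a confidential VM.
    type Target = 'on-device' | 'confidential-vm';

    function routePrompt(prompt: string, cloudAllowed: boolean): Target {
      const sensitive = /password|ssn|medical|bank/i.test(prompt);
      if (sensitive || !cloudAllowed || prompt.length < 500) return 'on-device';
      return 'confidential-vm';
    }

    async function ask(prompt: string, cloudAllowed: boolean): Promise<string> {
      return routePrompt(prompt, cloudAllowed) === 'on-device'
        ? runLocalModel(prompt)        // small quantized model in-process
        : callConfidentialVm(prompt);  // attested TEE endpoint in the cloud
    }

    // Stubs standing in for the real inference backends.
    async function runLocalModel(prompt: string): Promise<string> { return `[local] ${prompt}`; }
    async function callConfidentialVm(prompt: string): Promise<string> { return `[cloud] ${prompt}`; }
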
majestic5762 commented on LLM in a Flash: Efficient LLM Inference with Limited Memory   huggingface.co/papers/231... · Posted by u/ghshephard
dwd · 2 years ago
Apple was running a model double the size of the available memory. Not sure if that was a sweet spot they found or if you could sacrifice response time to run even bigger models.

The paper is worth a read in full as what they are doing is pretty cool:

https://arxiv.org/pdf/2312.11514

Highlight from the paper...

"Then, we introduce two complementary techniques to minimize data transfer and maximize flash memory throughput:

• Windowing: We load parameters for only the past few tokens, reusing activations from recently computed tokens. This sliding window approach reduces the number of IO requests to load weights.

• Row-column bundling: We store a concatenated row and column of the up-projection and down-projection layers to read bigger contiguous chunks from flash memory. This increases throughput by reading larger chunks."

majestic5762 · 2 years ago
I was thinking a couple of days ago about this concept of windowing for LLMs, but I lack the technical skills to implement it. Now Apple just published a paper on it. This is what I call synchronicity
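
The windowing idea can be shown as a toy cache: keep in RAM only the weight rows for neurons that were active over the last k tokens, and on each new token load just the missing rows from flash and evict the stale ones. A sketch under those assumptions, with neuron-activity prediction and the actual flash I/O stubbed out as callbacks:

    // Toy sliding-window weight cache in the spirit of the paper's "windowing":
    // only the union of neurons active over the last `windowSize` tokens stays
    // loaded; each new token loads the missing rows and evicts stale ones.
    class WindowedWeightCache {
      private history: Set<number>[] = [];  // active neuron ids per recent token
      private loaded = new Set<number>();   // neuron rows currently in RAM

      constructor(private windowSize: number,
                  private loadRow: (id: number) => void,     // read row from flash
                  private evictRow: (id: number) => void) {} // drop row from RAM

      onToken(activeNeurons: Set<number>): void {
        this.history.push(activeNeurons);
        if (this.history.length > this.windowSize) this.history.shift();

        // Union of everything needed for the current window.
        const needed = new Set<number>();
        for (const step of this.history) for (const id of step) needed.add(id);

        // Load only the rows not already resident (the small delta per token).
        for (const id of needed) {
          if (!this.loaded.has(id)) { this.loadRow(id); this.loaded.add(id); }
        }

        // Evict rows no longer referenced by any token in the window.
        for (const id of [...this.loaded]) {
          if (!needed.has(id)) { this.evictRow(id); this.loaded.delete(id); }
        }
      }
    }

Row-column bundling, the second technique in the quote, is then a storage-layout choice: each neuron's up-projection row and down-projection column are stored together so every such load is one larger contiguous read.
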
majestic5762 commented on Pipe Dreams: The life and times of Yahoo Pipes   retool.com/pipes... · Posted by u/type0
majestic5762 · 2 years ago
Yahoo Pipes was full of XSS vulns. I used to play with cookie grabbers
majestic5762 commented on Show HN: Recompyle – A JavaScript developer-friendly console / debugger   recompyle.com/... · Posted by u/dam11
majestic5762 · 2 years ago
You just pretty print console logs at the breakpoint location: an IDE extension plus a web app. Why closed source, lol. I won't pay for it; I could build my own. Sorry, but you first need open-source traction and then try selling to enterprises. Good luck.
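
For what it's worth, the core of "pretty print at the call site" is only a few lines; a rough DIY sketch (the stack-frame parsing is best-effort and engine-dependent):

    // Minimal DIY "pretty logger": JSON-indent the value and tag it with the
    // caller's file:line parsed from a throwaway Error stack (V8-style format).
    function plog(label: string, value: unknown): void {
      const frame = new Error().stack?.split('\n')[2]?.trim() ?? 'unknown location';
      console.log(`[${label}] ${frame}\n${JSON.stringify(value, null, 2)}`);
    }

    // plog('user', { id: 42, roles: ['admin'] });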
