Readit News
takantri commented on Super Mario 64 for the PS1   github.com/malucard/sm64-... · Posted by u/LaserDiscMan
giancarlostoro · 2 months ago
Interesting, I'm wondering if the GBA could handle a light version of a Minecraft style game, but the N64 looks like it could be great at it too. I need to get me a SummerCart64 one of these days and experiment with my old N64.
takantri · 2 months ago
ClassiCube (https://github.com/ClassiCube/ClassiCube) exists, which is an open-source Minecraft Classic reimplementation with an N64 port among dozens of others. HN discussed it two years ago (https://news.ycombinator.com/item?id=37518874).

ClassiCube has a WIP GBA port, but according to commits it only hits 2 FPS as of now, and it's not yet listed in the README.

On a related tangent, there's also Fromage, a separate Minecraft Classic clone written for the PS1 (https://chenthread.asie.pl/fromage/).

takantri commented on Transform your Android device into a Linux desktop   mrs-t.medium.com/transfor... · Posted by u/mikece
takantri · 3 years ago
As an alternative to the VNC method suggested here, there's also the Termux-X11 add-on[1].

In my experience, VNC is more stable, but when Termux-X11 works, it feels snappier than VNC, and the Linux environment's resolution automatically scales to the available screen space (making portrait/landscape switches and soft keyboards feel seamless).
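For reference, a minimal Termux-X11 session might look something like this (the package and command names here are from memory and may differ by version, so treat them as assumptions; the companion Android app also needs to be installed):

```shell
# Install the X server add-on package inside Termux.
pkg install termux-x11-nightly

# Start the X server on display :0 in the background.
termux-x11 :0 &

# Point a desktop environment (here XFCE, if installed) at that display.
env DISPLAY=:0 xfce4-session
```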

[1] - https://github.com/termux/termux-x11

takantri commented on MartyPC, cycle accurate IBM PC/XT emulator   vogons.org/viewtopic.php?... · Posted by u/nbaksalyar
qingcharles · 3 years ago
This is a fantastic piece of work.

Are there any cycle-accurate emulators like this for 8-bit consoles?

takantri · 3 years ago
I know the NES has many cycle-accurate emulators[0].

[0] - https://emulation.gametechwiki.com/index.php/Nintendo_Entert...

takantri commented on State-of-the-art open-source chatbot, Vicuna-13B, just released model weights   twitter.com/lmsysorg/stat... · Posted by u/weichiang
ode · 3 years ago
Is there some single page that keeps a running status of the various LLMs and the software to make them runnable on consumer hardware?
takantri · 3 years ago
Hi! Funnily enough, I couldn't find much on it either, so that's exactly what I've been working on for the past few months, in case this kind of question ever got asked.

I've recently opened a GitHub repository which includes information on both AI model series[0] and frontends you can use to run them[1]. I wrote a Reddit post beforehand that's messier, but a lot more technical[2].

I try to keep them as up-to-date as possible, but I might've missed something or my info may not be completely accurate. It's mostly to help get people's feet wet.

[0] - https://github.com/Crataco/ai-guide/blob/main/guide/models.m...

[1] - https://github.com/Crataco/ai-guide/blob/main/guide/frontend...

[2] - https://old.reddit.com/user/Crataco/comments/zuowi9/opensour...

takantri commented on Petals: Run 100B+ language models at home bit-torrent style   github.com/bigscience-wor... · Posted by u/antman
thot_experiment · 3 years ago
Is there an easy way to run a large language model and/or speech synthesis model locally/in Colab? Stable Diffusion is easily accessible and has a vibrant community around AUTOMATIC1111. It's super straightforward to run on a Google Colab. Are there similar open-source solutions for LLM/TTS? I believe I had GPT-2 running locally at one point, as well as ESPnet2? Not 100% sure; it's been a while. Wondering what the state of the art for FOSS neural LLMs and TTS is in 2023.
takantri · 3 years ago
For LLMs, the closest thing that comes to mind is KoboldAI[1]. The community isn't as big as Stable Diffusion's, but the Discord server is pretty active. I'm an active member of the community who likes to inform others about it (you can see my previous Hacker News comment was about the same thing, haha).

Like Stable Diffusion, it's a web UI (vaguely reminiscent of NovelAI's) that uses a backend (in this case, Huggingface Transformers). You can use different model architectures, from ones as early as GPT-2 to newer ones like BigScience's BLOOM, Meta's OPT, and EleutherAI's GPT-Neo and Pythia, as long as they're implemented in Huggingface.

They have official support for Google Colab[2][3]; most of the models shown are finetunes on novels (Janeway), choose-your-own-adventures (Nerys / Skein / Adventure), or erotic literature (Erebus / Shinen). You can use the models listed or provide a Huggingface URL.

[1] - https://github.com/koboldai/koboldai-client (source code)

[2] - https://colab.research.google.com/github/koboldai/KoboldAI-C... (TPU colab; 13B and 20B models)

[3] - https://colab.research.google.com/github/koboldai/KoboldAI-C... (GPU colab; 6B models and lower)

takantri commented on OpenAI ChatGPT: Optimizing language models for dialogue   openai.com/blog/chatgpt/... · Posted by u/amrrs
doctoboggan · 3 years ago
Does anyone know why the OS community was so quickly able to replicate (surpass?) DALL-E but not GPT-3?

I would love it if I were able to run these things locally like I am with stable diffusion.

takantri · 3 years ago
I made an account to reply to this, since I tend to use KoboldAI[1][2] occasionally.

It's an open-source text generation frontend that you can run on your own hardware (or cloud computing like Google Colab). It can be used with any Transformers-compatible text generation model[3] (OpenAI's original GPT-2, EleutherAI's GPT-Neo, Facebook's OPT, etc).

It's debatable whether OPT has hit that sweet spot of "surpassing" GPT-3 at a smaller size. As far as I know, their biggest freely downloadable model is 66B parameters (175B is available but requires requesting access), but I've had serviceable results with as few as 2.7B parameters, which can run on 16GB of RAM or 8GB of VRAM (via GPU).
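As a rough back-of-the-envelope check on those numbers (this is a sketch of weight-only memory math under my own assumptions of fp32 on CPU and fp16 on GPU; real usage adds overhead for activations, context, and the framework itself):

```python
def model_memory_gib(params_billion: float, bytes_per_param: int) -> float:
    """Approximate weight-only memory footprint in GiB."""
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

# OPT-2.7B as a worked example:
cpu = model_memory_gib(2.7, 4)  # fp32 weights -> fits in 16 GB of RAM
gpu = model_memory_gib(2.7, 2)  # fp16 weights -> fits in 8 GB of VRAM
print(f"fp32: {cpu:.1f} GiB, fp16: {gpu:.1f} GiB")
```

This is why halving the precision (fp32 to fp16) roughly halves the hardware requirement for the same parameter count.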

There's a prominent member in the KAI community that even finetunes them on novels and erotic literature (the latter of which makes for a decent AI "chatting partner").

But you do bring up a great point: the field of OS text generation develops at a sluggish pace compared to Stable Diffusion. I assume people are more interested in generating their own images than generating text; images are just more impressive.

[1] - https://github.com/koboldai/koboldai-client

[2] - https://old.reddit.com/r/KoboldAI/

[3] - https://huggingface.co/models?pipeline_tag=text-generation
