Readit News
antome commented on Native Minecraft servers with GraalVM Native Image   github.com/hpi-swa/native... · Posted by u/fniephaus
majidazimi · 3 years ago
>Startup is a non-issue

Yes, it is. Writing any short-lived job -- one that runs for a few seconds and exits, like a Lambda function or a Kubernetes job -- in Java is pointless for exactly this reason: the startup time is longer than the run time.

antome · 3 years ago
This is a Minecraft server, though, so it's going to be running 24/7.
antome commented on Cores of Rendering Madness: The AMD Threadripper Pro 3995WX Review   anandtech.com/show/16478/... · Posted by u/scns
antome · 5 years ago
I do enjoy how reviewers are now using software rendering of Crysis as a benchmark, even if it's just a joke. It would be pretty interesting to see what kind of game engine you could build if it were hyper-optimized for software rendering on modern, massively parallel CPUs.
antome commented on Blender 2.82   blender.org/press/blender... · Posted by u/app4soft
antome · 6 years ago
While not all the new features will be in this release, the rate of improvement for Blender's sculpting tools has been astounding. For example, the new cloth brush:

( https://twitter.com/pablodp606/status/1223663016811618307 , https://twitter.com/pablodp606/status/1223060180344147970 )

Available as an experimental build here: https://blender.community/c/graphicall/Sjbbbc/

antome commented on Standardizing OpenAI’s deep learning framework on PyTorch   openai.com/blog/openai-py... · Posted by u/pesenti
antome · 6 years ago
As someone who has used both PyTorch and TensorFlow for a couple of years now, I can attest to the faster research iteration times with PyTorch. TensorFlow has always felt like it was designed for some mythical researcher who could come up with a complete architecture ahead of time, assembled from off-the-shelf parts.
antome commented on We can’t trust AI systems built on deep learning alone   technologyreview.com/s/61... · Posted by u/laurex
mindgam3 · 6 years ago
> AlphaGo can play very well on a 19x19 board but actually has to be retrained to play on a rectangular board.

This right here is the soft underbelly of the entire “machine learning as step towards AGI” hype machine, fueled in no small part by DeepMind and its flashy but misleading demos.

Once a human learns chess, you can give her a 10x10 board and she will perform at nearly the same skill level with zero retraining.

Give the same challenge to DeepMind’s “superhuman” game-playing machine and it will be an absolute patzer.

This is an obvious indicator that the state of the art in so-called "machine learning" doesn't involve any actual learning in the sense the word is normally applied to intelligent systems like humans or animals.

I am continually amazed by the failure of otherwise exceedingly intelligent tech people to grasp this problem.

antome · 6 years ago
This doesn't have much to do with the algorithms; it's more about the engineering decisions that went into AlphaGo and AlphaZero. They are designed to play one combinatorial game really well. With a bit of additional effort and a lot of additional compute, you could expand the model to account for multiple rule and scale variations, maybe even different combinatorial games.

u/antome

Karma: 465 · Cake day: December 5, 2014