seiji commented on Lisp Flavoured Erlang 1.0 released after 8 years of development   github.com/rvirding/lfe... · Posted by u/vdaniuk
dang · 10 years ago
Please keep programming language flamewars off HN.
seiji · 10 years ago
The first part was a question and the second two parts were examples.

Writers can't be held responsible for the lack of reading comprehension exhibited by audience members.

seiji commented on Stargate Physics 101: A comedy about the importance of software testing   archiveofourown.org/works... · Posted by u/gwern
Someone1234 · 10 years ago
I agree that it is a play on that...

However, in Stargate the actual 38-minute limit isn't a computer limitation, it is a physical one. In the original show (i.e. SG-1) Carter said it is "impossible" due to physics for a wormhole to stay open longer, and they also established that with enough raw power (e.g. a black hole, an Ancient device, etc.) it could be kept open longer or near indefinitely; it just wasn't possible with the normal power a Stargate needs to operate.

Keep in mind they did exceed 38 minutes many times, with a different explanation each time[0], none of which was computer related. It is also worth noting that Earth in the show built its own DHD, which presumably would have different computer limitations than the Ancient-built DHDs (as established when all DHDs in the universe except theirs died due to the malware).

[0] http://stargate.wikia.com/wiki/Stargate#Exceptions

seiji · 10 years ago
isn't a computer limitation, it is a physical one.

Assumes facts not in evidence. That could easily be a firmware (on-gate) vs. software (DHD) argument.

Carter said it is "impossible" due to physics

Potentially unreliable narrator.

also established that with enough raw power (e.g. blackhole, ancient device, etc) it could be kept open longer

Yup, cited above.

built their own DHD which presumably would have different computer limitations

firmware vs. software

when all DHDs in the universe died except theirs due to the malware

you're not doing devops unless you have one button that can destroy your entire infrastructure

seiji commented on Ask HN: Did anyone notice GitHub removed project search from index page?    · Posted by u/dragonsh
girkyturkey · 10 years ago
Unfortunately, I believe that this is the case. GitHub was once a great place where signing up was your choice, but it looks like GitHub has fallen prey to the "we need to get more signups" fad that a lot of companies are following.
seiji · 10 years ago
Protip: don't take hundreds of millions of dollars from VCs if you care about your users.
seiji commented on Stargate Physics 101: A comedy about the importance of software testing   archiveofourown.org/works... · Posted by u/gwern
jdaley · 10 years ago
We eventually tracked this down to some of the stabilisation algorithms – they work fine within the intended time frames, but there's an inefficiency in them that gets worse the longer the wormhole is open and eventually they can't keep up with the shifts.

An allusion to the accumulated rounding bug that caused the Patriot missile incident in 1991? That system was likewise intended to operate for only short periods.

From https://autarkaw.wordpress.com/2008/06/02/round-off-errors-a...

In the Patriot missile, time was saved in a fixed-point register that had a length of 24 bits. Since the internal clock of the system ticks every one-tenth of a second, 1/10 expressed in a 24-bit fixed-point register is 0.0001100110011001100110011 (the exact value is 209715/2097152).

On the day of the mishap, the battery on the Patriot missile was left on for 100 consecutive hours, hence causing an inaccuracy of 9.5e-8 × 10 × 60 × 60 × 100 = 0.34 seconds.

The shift calculated in the range gate due to the 0.342-second error was 687 m. A shift larger than 137 m resulted in the Scud not being targeted, killing 28 Americans in the barracks in Saudi Arabia.

seiji · 10 years ago
Nice comparison!

But, more likely, it's just a playful explanation for why a Stargate (when unaided by some hostile energy source or time dilation field) shuts off after 38 minutes, something the show never explains.

seiji commented on Lisp Flavoured Erlang 1.0 released after 8 years of development   github.com/rvirding/lfe... · Posted by u/vdaniuk
StreamBright · 10 years ago
To me this is more of a feature than a limitation. I do not like optional arguments in the &rest style. In Erlang you need an explicit number of arguments and need to implement them explicitly. I really like that. Keep in mind that default values are trivial to implement when the smaller-arity function calls the higher-arity one with the added values. Or you can call through to the higher-arity function yourself and specify the parameters yourself. In my opinion this is an elegant and safe way of dealing with the problem.
seiji · 10 years ago
In Erlang you need an explicit number of arguments and need to implement them explicitly.

Or, you know, just use lists as parameters. That's the standard Erlang pattern for unknown parameter counts (e.g. io:format("debug ~s because ~p~n", [SomeString, SomeType])).

seiji commented on Collections-C, generic data structures for C   github.com/srdja/Collecti... · Posted by u/marxo
eps · 10 years ago
Nice, but it's rather naive and, because of that, needlessly wasteful.

It's C.

Why on Earth would you want to allocate a separate list node for each piece of data when you can embed the node directly into the data and then use container_of or a similar offsetof() derivative to get a pointer to the data from a pointer to the list item? It saves you at least sizeof(void*) per item and eliminates any chance of list_add ever failing, among other things.

The same goes for custom allocators: why would you drag around pointers to malloc, calloc, and free when all you need is just a pointer to realloc?

This sort of thing. The code is nice, but it's not how one would write a container library after a few years of hands-on C experience.

seiji · 10 years ago
allocate a separate list node for each piece of data

So many programmers don't even consider "overhead" of data when writing things. But, most of the time it doesn't matter. Do you need a list of ten things? Great. Do whatever. Do you need a list of a billion things? Then you need to rethink everything from the bottom up.

when all you need is just a pointer to realloc

Wrong! https://github.com/Tarsnap/libcperciva/commit/cabe5fca76f6c3...

seiji commented on AlphaGo shows its true strength in 3rd victory against Lee Sedol   gogameguru.com/alphago-sh... · Posted by u/luu
partycoder · 10 years ago
Skeptics have said that achieving this milestone would not happen within this decade, within our lifetimes, or ever. It happened yesterday.

It took Lee Sedol many decades of his life to train to achieve this level. And his ability to pass on his skills is limited. Now that a computer has achieved this level, the state of the neural network behind it can be serialized and run on an unlimited number of computers, giving millions of systems that are more proficient at Go than the best player in the world.

People have said that achieving the cognitive level of the human brain requires matching its computational power. But if you take out all the parasympathetic and motor boilerplate, what is actually left for mental tasks is much less than that; most of that power is not even recruited for higher-level mental tasks. That lowers the bar for strong AI.

Then, strong AI can be immortal and never physically deteriorate from aging. Strong AI can multiply without limit and communicate at a rate equivalent to writing millions of books in a second. It could transfer all its knowledge in seconds. It can also recursively improve itself. This advantage will lower the bar for strong AI even more.

seiji · 10 years ago
strong AI can be immortal

My grandfather was very racist and never changed as long as he lived. I'm glad most people aren't immortal. There's no guarantee a full AI wouldn't be tempted by evil and just become republican (or worse, a VC).

It can also recursively improve itself.

So can people (the more you know, the more you can learn), but most don't. It's important to remember "intelligence" isn't an abstract concept—intelligence is also embodied in personality—and personalities have wishes and goals and desires and loves and hates and that one song they can't get out of their head. A true "strong AI" will be fully conscious, not just algorithmic function baiting.

Good luck telling a mildly strong godform to stop tripping on youtube videos and instead solve the global economic stability equation over lunch.

This advantage will lower the bar for strong AI even more.

That's kinda foofy conjecture. Being good at rectangular grid outcomes isn't necessarily a step in any direction towards a hands-off tax evaluating robot.

It feels really really good to talk about how AI will be a hundred billion trillion times smarter than the combined brainpower of all humans that have ever lived, but it feels good in the same way thinking dead people live again after they die feels good—it triggers that warm wishful thinking parietal lobe that removes a bit of reason for the sake of an overarching calmness.

Enthusiasm is great, but tempering it with real expectations and less technopriesthood is better.
