alextp commented on Measuring Goodhart’s Law   openai.com/blog/measuring... · Posted by u/todsacerdoti
dr_dshiv · 3 years ago
What is best of n sampling? I didn’t follow. Is that like, which of these n images is best, and then turning that into a measure?
alextp · 3 years ago
Choose the top N according to the proxy objective and then use the real objective to choose the best out of those N candidates.
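That two-stage selection can be sketched in a few lines of Python. This is an illustrative toy, not code from the post; `proxy_score` and `true_score` stand in for whatever cheap and expensive objectives you have:

```python
def best_of_n(candidates, proxy_score, true_score, n):
    """Rank candidates by a cheap proxy objective, keep the top N,
    then pick the final winner with the expensive true objective."""
    top_n = sorted(candidates, key=proxy_score, reverse=True)[:n]
    return max(top_n, key=true_score)
```

The point is that the expensive objective is only evaluated N times, no matter how many candidates you start with.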
alextp commented on Great Noir Lives and Dies On Dialogue   crimereads.com/noir-dialo... · Posted by u/lermontov
disordinary · 6 years ago
I think it's pretty hard to write a hardboiled-style noir nowadays without it feeling derivative. The tropes are now clichés and the dialogue can be cumbersome and hokey.

A lot of the time it feels like someone imitating noir rather than innovating within the genre, and that's sad.

You can undoubtedly do it and I'd love to see some examples of a great modern noir to read.

alextp · 6 years ago
Claire DeWitt and the City of the Dead, by Sara Gran, is a good example. Very colorful language.

Also the first volumes of the Berlin Noir "trilogy".

alextp commented on Why Google+ Failed   onezero.medium.com/why-go... · Posted by u/D_Guidi
oneshot908 · 6 years ago
I was an IC6 at Google when Google+ launched...

From my perspective, I think partitioning the Google+ team into their own Dark Tower with their own super-healthy cafeteria that was for them and their executives alone was the biggest problem. IMO this even foreshadows separating off Google Brain from the rest of Google and giving them resources not available to anyone else. Google was at its best a relatively open culture and 2011 is the year they killed other cultural icons such as Google Labs and (unofficially) deprecated 20% time. I think the road to the Google we see today started then. It's also the year they paid too much for Motorola and started pushing Marissa Mayer out the door.

Then there was the changing story of the 2011 bonus. When I hired in, we were all told our 2011 bonus would be tied to the success of Google+. That's a fantastic way to rally your co-workers, except... Once they launched Google+, the Google+ Eliterati (so to speak) changed their minds and announced that any Google+ bonus was for Google+ people alone. Maximum emotionally intelligent genius IMO. Now your own co-workers have been burned. Also not very "googly."

Finally, there was "Real Names." The week of its launch everyone I knew wanted an invite and I used up every single one of them and continued to do so as more were made available to me. Then "Real Names" happened and people stopped asking for invites overnight. That's the moment for me when the tide turned against this thing.

I really liked the initial Google+ UI personally, but the UI ran head-on into the nonsensical "Kennedy" initiative wherein some brilliant designer seemed to decide that since monitors are now twice the size they used to be, they should add twice the whitespace to show the same amount of information as on a much smaller screen. Subversives within the company took to posting nearly blank sheets of printer paper on walls with the single word "Kennedy" in a tiny font you'd only see if you got close to the things. That said, my godawful company man manager would repeatedly proclaim how beautiful he thought the Kennedy layout was in our office for all to hear whenever they updated GMail or Search to use it.

Of course, there are other reasons beyond my tiny perspective here, but I did have a front row seat for this and it was really disappointing to see a potential Facebook killer die of a thousand papercuts like this.

alextp · 6 years ago
How is Brain separate from the rest of Google?
alextp commented on Wittgenstein’s theories are the basis of all modern NLP   towardsdatascience.com/ne... · Posted by u/ghosthamlet
perfmode · 7 years ago
Can someone ELI5 the term "embedding"?
alextp · 7 years ago
The historical picture makes a little more sense (though this is not something a 5-year-old would understand).

We call these things embeddings because you start with a very high-dimensional space (imagine a space with one dimension per word type, where each word is a unit vector in the appropriate dimension) and then approximate distances between sentences / documents / n-grams in this space using a space of much smaller dimensionality. So we "embed" the high-dimensional space in a manifold in the lower-dimensional space.

It turns out, though, that these low-dimensional representations satisfy all sorts of properties we like, which is why embeddings are so popular.
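A toy illustration of the two spaces involved. The one-hot table is the high-dimensional picture above; the low-dimensional vectors here are random rather than learned, purely to show the shapes (real embeddings are trained so that distances in the small space approximate distances in the big one):

```python
import random

vocab = ["cat", "dog", "car"]

# High-dimensional picture: one dimension per word type,
# each word a unit vector along its own axis.
one_hot = {w: [1.0 if i == j else 0.0 for j in range(len(vocab))]
           for i, w in enumerate(vocab)}

# Low-dimensional embedding: each word maps to a dense vector of
# dimension << |vocab|. Random here; learned in practice.
dim = 2
random.seed(0)
embedding = {w: [random.gauss(0.0, 1.0) for _ in range(dim)] for w in vocab}
```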

alextp commented on AutoGraph converts Python into TensorFlow graphs   medium.com/tensorflow/aut... · Posted by u/jbgordon
alextp · 7 years ago
I've contributed to autograph and would love to answer any questions.
alextp commented on Generative Adversarial Networks Code in PyTorch and Tensorflow   github.com/diegoalejogm/g... · Posted by u/diegoalejogm
diegoalejogm · 8 years ago
More models coming soon! :)
alextp · 8 years ago
Cool! Did you try using tensorflow's eager execution?
alextp commented on Eager Execution: An imperative, define-by-run interface to TensorFlow   research.googleblog.com/2... · Posted by u/alextp
brittohalloran · 8 years ago
What are the strengths and weaknesses of each? I've been using keras but planning on diving into a real deal framework next. Tensorflow is appealing for the momentum it has in the community, but pytorch looks easier to learn.

Doing image classification, object localization, and homography (given an input image, which of my known template images matches it, and in what orientation).

alextp · 8 years ago
I think Keras is a real deal framework. It provides a higher-level API than most other frameworks, but it has pretty sweet portability of models across frameworks and platforms and most research papers are implementable in Keras without too much trouble.
alextp commented on Eager Execution: An imperative, define-by-run interface to TensorFlow   research.googleblog.com/2... · Posted by u/alextp
gormanc · 8 years ago
Hot damn this has got me all giddy. How will this work on single node multi-GPU systems? For example, with PyTorch you have to either use threading, multiprocessing, or even MPI. Can you think of a not-too-scary way to use eager execution with multiple GPUs?
alextp · 8 years ago
We're still fairly early in the project, so for now threading is the only supported way.

We can do better, however, and we're working on ways to leverage the hardware better (for example, if you have no data-dependent choices in your model we can enqueue kernels in parallel on all GPUs in your machine at once from a single python thread, which will perform much better than explicit python multithreading).

Stay on the lookout as we release new experimental APIs to leverage multiple GPUs and multiple machines.
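The explicit-threading approach described above can be caricatured with plain Python threads. This is a toy sketch only, with callables standing in for GPU kernels; none of these names are TensorFlow API:

```python
import threading

def run_replica(device_id, ops, results):
    # Stand-in for driving one GPU: run this replica's "kernels" in order.
    results[device_id] = [op() for op in ops]

def replicate(ops, num_devices):
    """Run the same op list once per device, each from its own Python thread."""
    results = {}
    threads = [threading.Thread(target=run_replica, args=(d, ops, results))
               for d in range(num_devices)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

The enqueue-from-a-single-thread optimization mentioned in the comment would avoid exactly this per-device Python threading overhead.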

alextp commented on Eager Execution: An imperative, define-by-run interface to TensorFlow   research.googleblog.com/2... · Posted by u/alextp
sandGorgon · 8 years ago
I really wish. https://github.com/tensorflow/tensorflow/issues/12750

In fact, if you dig up the case, even official support told me that SavedModel needs some freezing using bazel, otherwise it doesn't work.

The github page and stackoverflow are full of these. If you can, please take the message to the other side :(

I don't think the cloud guys (where training will happen in distributed mode) talk to the android guys (where models will be used after quantization). There is a huge serialization problem that all of us are currently struggling with.

alextp · 8 years ago
Ah, I didn't know SavedModel didn't work in android. I think freezing is still the way to go there? I'm sorry, I don't personally work on the mobile side of things.
alextp commented on Eager Execution: An imperative, define-by-run interface to TensorFlow   research.googleblog.com/2... · Posted by u/alextp
sandGorgon · 8 years ago
Hey guys, if I could request... Please fix the serialization story for tensorflow. There are 6 googleable methods to export from tensorflow, and nobody knows what will work on the cloud, what can be exported from cloudml, and what can be loaded on Android.

It has to be consistent and there has to be one way to do it.

I personally have a 10-message thread with Google cloud support on exporting a Cloud-trained model to tensorflow and nobody could figure it out [Case #13619720].

alextp · 8 years ago
Did you try using SavedModel? It should be seamless to use downstream with tensorflow serving and it's not that hard to get estimators to spit those out.
