A lot of the time it feels like someone imitating noir rather than innovating within the genre, and that's sad.
You can undoubtedly do it, and I'd love to see some examples of great modern noir to read.
Also the first volumes of the Berlin Noir "trilogy".
From my perspective, the biggest problem was partitioning the Google+ team into their own Dark Tower, with their own super-healthy cafeteria reserved for them and their executives alone. IMO this even foreshadows the later move of separating Google Brain off from the rest of Google and giving them resources not available to anyone else. Google at its best was a relatively open culture, and 2011 was the year they killed other cultural icons such as Google Labs and (unofficially) deprecated 20% time. I think the road to the Google we see today started then. It's also the year they paid too much for Motorola and started pushing Marissa Mayer out the door.
Then there was the changing story of the 2011 bonus. When I hired in, we were all told our 2011 bonus would be tied to the success of Google+. That's a fantastic way to rally your co-workers, except... Once they launched Google+, the Google+ Eliterati (so to speak) changed their minds and announced that any Google+ bonus was for Google+ people alone. Maximum emotionally intelligent genius IMO. Now your own co-workers have been burned. Also not very "googly."
Finally, there was "Real Names." The week of Google+'s launch, everyone I knew wanted an invite; I used up every single one I had and kept doing so as more were made available to me. Then "Real Names" happened, and people stopped asking for invites overnight. That's the moment, for me, when the tide turned against the whole thing.
I really liked the initial Google+ UI personally, but the UI ran head-on into the nonsensical "Kennedy" initiative, wherein some brilliant designer seemed to decide that since monitors are now twice the size they used to be, they should add twice the whitespace to show the same amount of information as on a much smaller screen. Subversives within the company took to posting nearly blank sheets of printer paper on walls with the single word "Kennedy" in a font so tiny you'd only see it if you got close to the things. Meanwhile, my godawful company-man manager would repeatedly proclaim, for all in our office to hear, how beautiful he thought the Kennedy layout was whenever GMail or Search was updated to use it.
Of course, there are other reasons beyond my tiny perspective here, but I did have a front row seat for this and it was really disappointing to see a potential Facebook killer die of a thousand papercuts like this.
We call these things embeddings because you start with a very high-dimensional space (imagine a space with one dimension per word type, where each word is a unit vector in the appropriate dimension) and then approximate distances between sentences / documents / n-grams in that space using a space with much smaller dimensionality. So we "embed" the high-dimensional space as a manifold in the lower-dimensional space.
It turns out, though, that these low-dimensional representations satisfy all sorts of properties that we like, which is why embeddings are so popular.
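To make that concrete, here's a toy numpy sketch; the vocabulary, the projection matrix, and its values are all made up for illustration, and a real embedding matrix would be learned rather than hand-written:

```python
import numpy as np

# Toy vocabulary: one dimension per word type (hypothetical, for illustration).
vocab = ["king", "queen", "banana"]
one_hot = np.eye(len(vocab))              # each word is a unit vector, shape (3, 3)

# A made-up "learned" matrix projecting the 3-d one-hot space down to 2 dims.
E = np.array([[ 0.9,  0.1],
              [ 0.8,  0.2],
              [-0.1,  0.9]])

embedded = one_hot @ E                    # rows are now 2-d word embeddings

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# In the one-hot space, every pair of distinct words is equally dissimilar...
print(cosine(one_hot[0], one_hot[1]))     # 0.0
print(cosine(one_hot[0], one_hot[2]))     # 0.0

# ...while in the low-dimensional space, related words can end up close together.
print(cosine(embedded[0], embedded[1]))   # high: "king" is near "queen"
print(cosine(embedded[0], embedded[2]))   # low:  "king" is far from "banana"
```

The "properties we like" show up in that last comparison: distances and directions in the small space can reflect similarity in a way the one-hot space never can.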
Doing image classification, object localization, and homography (given an input image, which of my known template images matches it, and in what orientation).
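For the homography part, one common classical sketch (not necessarily what's being used here) is local features plus RANSAC, e.g. with OpenCV; the file names and threshold below are placeholders:

```python
import cv2
import numpy as np

# Template-to-image matching with ORB features + RANSAC homography.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp_t, des_t = orb.detectAndCompute(template, None)
kp_s, des_s = orb.detectAndCompute(scene, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)

src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# H maps template coordinates into scene coordinates; the inlier count says how
# well this template explains the scene (whether it "matches"), and H itself
# encodes the orientation/pose of the match.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("inliers:", int(mask.sum()), "of", len(matches))
print("homography:\n", H)
```

Running that once per known template and keeping the one with the most inliers is the usual way to answer "which template, and in what orientation."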
We can do better, however, and we're working on ways to make better use of the hardware (for example, if you have no data-dependent choices in your model, we can enqueue kernels in parallel on all GPUs in your machine at once from a single Python thread, which will perform much better than explicit Python multithreading).
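To picture the single-thread, multi-GPU pattern, here's a rough graph-mode TF 1.x sketch; the replica function, shapes, and GPU count are made up, and this is not the experimental API being described:

```python
import tensorflow as tf

NUM_GPUS = 2  # assumption: a 2-GPU machine

def replica(x):
    # Placeholder model: one dense layer.
    w = tf.get_variable("w", [128, 10], initializer=tf.glorot_uniform_initializer())
    return tf.matmul(x, w)

inputs = tf.placeholder(tf.float32, [None, 128])
outputs = []
for i in range(NUM_GPUS):
    # Build one replica per GPU in a single graph, all from one Python thread.
    with tf.device("/gpu:%d" % i), tf.variable_scope("replica_%d" % i):
        outputs.append(replica(inputs))

config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    batch = [[0.0] * 128]
    # One run call: the runtime dispatches kernels to both GPUs without any
    # explicit Python threading.
    results = sess.run(outputs, feed_dict={inputs: batch})
```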
Stay on the lookout as we release new experimental APIs to leverage multiple GPUs and multiple machines.
In fact, if you dig up the case, even official support told me that a SavedModel needs some freezing step via bazel (roughly the sketch below), otherwise it doesn't work.
The GitHub page and Stack Overflow are full of these. If you can, please take the message to the other side :(
I don't think the cloud guys (where training will happen in distributed mode) talk to the android guys (where models will be used after quantization). There is a huge serialization problem that all of us are currently struggling with.
It has to be consistent and there has to be one way to do it.
I personally have a 10-message thread with Google Cloud support about exporting a Cloud-trained model to TensorFlow, and nobody could figure it out [Case #13619720].
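For reference, the "freeze via bazel" workaround mentioned above looks roughly like the sketch below; the paths, output node name, and tensor names are placeholders, and this assumes the TF 1.x freeze_graph tool rather than any official export recipe:

```python
# Shell side (placeholder paths; assumes a TensorFlow source checkout):
#
#   bazel build tensorflow/python/tools:freeze_graph
#   bazel-bin/tensorflow/python/tools/freeze_graph \
#       --input_saved_model_dir=/tmp/exported_savedmodel \
#       --output_node_names=softmax_output \
#       --output_graph=/tmp/frozen.pb
#
# The same tool can be called from Python:
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph="",                          # unused when a SavedModel dir is given
    input_saver="",
    input_binary=False,
    input_checkpoint="",
    output_node_names="softmax_output",      # placeholder output node name
    restore_op_name="save/restore_all",      # conventional default
    filename_tensor_name="save/Const:0",     # conventional default
    output_graph="/tmp/frozen.pb",           # placeholder output path
    clear_devices=True,
    initializer_nodes="",
    input_saved_model_dir="/tmp/exported_savedmodel")  # placeholder SavedModel dir
```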