alextp · 8 years ago
You can read more about it in the blog post ( https://research.googleblog.com/2017/10/eager-execution-impe... ) or the README ( https://github.com/tensorflow/tensorflow/tree/master/tensorf... ). This is still a preview release, so you may hit some rough edges.

Looking forward to your feedback as you try it out.
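
For anyone who wants a taste before reading the docs, here's a minimal sketch of what the preview looks like (this assumes the tf.contrib.eager import path described in the README; it may move before a stable release):

    import tensorflow as tf
    import tensorflow.contrib.eager as tfe

    # Must be called once at program startup, before any ops are created.
    tfe.enable_eager_execution()

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tf.matmul(x, x)

    # Ops run immediately and return concrete values: no Session, no feed_dict.
    print(y)           # an eager Tensor holding the matmul result
    print(y.numpy())   # the same values as a plain NumPy array: [[7. 10.] [15. 22.]]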

josh11b · 8 years ago
I'm on the team that worked on this -- happy to answer questions!
gormanc · 8 years ago
Hot damn this has got me all giddy. How will this work on single node multi-GPU systems? For example, with PyTorch you have to either use threading, multiprocessing, or even MPI. Can you think of a not-too-scary way to use eager execution with multiple GPUs?
chrisprobert · 8 years ago
Announcing TensorFlow's new development roadmap mandate: copy everything PyTorch is doing :-)
ychujo2 · 8 years ago
I think you mean Google is following Chainer's lead, like Facebook already does? PyTorch started as a Chainer fork. Its dynamic graph internals are all from Chainer.
bradleyjg · 8 years ago
This isn't art. There are no points for originality. If open source projects borrow the best parts from each other, that's a good thing.
ma2rten · 8 years ago
This is the first time I am hearing this. I thought PyTorch was based on Torch (like the name implies). Do you have a reference or more information?
tempw · 8 years ago
Based on your reasoning, PyTorch is copying TensorFlow's static optimizations and production capabilities with JIT and ONNX, then? I've seen many folks requesting an imperative API.

You can't please everybody; whether or not they listen to users, people will still complain. But if both projects are making an effort to improve, the community can only benefit from the competition.

make3 · 8 years ago
I'm usually against this type of framework baiting, but being a tensorflow guy myself & having just spent the week coding with pytorch full time.... this is basically identical to pytorch
brittohalloran · 8 years ago
What are the strengths and weaknesses of each? I've been using keras but planning on diving into a real deal framework next. Tensorflow is appealing for the momentum it has in the community, but pytorch looks easier to learn.

Doing image classification, object localization, and homography (given an input image, which of my known template images matches it and in what orientation).

solomatov · 8 years ago
It's actually a very good thing that they're copying good ideas from other frameworks.
make3 · 8 years ago
The question now is: are TensorFlow Eager's RNNs as slow as PyTorch's?
yablak · 8 years ago
(I'm author of the TF rnn api & tf.contrib.seq2seq)

There's a lot of work being done on this specific part. If you have a standard RNN architecture you want to run, you can probably use the cudnn code in tf.contrib.cudnn to get a super fast implementation.

There is some performance work that needs to be done on properly caching weights between the time steps of an RNN when you use a tf.nn.RNNCell. Currently, if you want to implement a custom architecture, a seq2seq decoder, or an RL agent, this is the API you would want to use. Several of the eager benchmarks are based on this API, so its performance will only improve.
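
To make that concrete, the per-time-step pattern in question looks roughly like this (a sketch with made-up shapes, using a stock LSTM cell; it assumes the graph-style rnn_cell classes behave the same way under eager in the preview):

    import tensorflow as tf

    batch_size, time_steps, input_dim, num_units = 32, 20, 128, 256

    cell = tf.nn.rnn_cell.BasicLSTMCell(num_units)
    state = cell.zero_state(batch_size, tf.float32)
    inputs = tf.random_normal([time_steps, batch_size, input_dim])

    outputs = []
    # Every call to cell() reuses the same weights; the caching work mentioned
    # above is about avoiding redundant per-step lookups of those weights.
    for t in range(time_steps):
        output, state = cell(inputs[t], state)
        outputs.append(output)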

I'm hopeful that for the next major release, we'll also have support for eager in tf.contrib.seq2seq.

congerous · 8 years ago
TensorFlow: everything to all people.

Eager is actually not as innocent as "open-source projects borrowing the best parts from each other", as some commenters here suggest.

Google is attempting to dominate the machine-learning API and the Python ecosystem for scientific computing.

The company that controls the API influences which apps are built on it and how. Think about how Google bundled Android services on top of Android, and how that posed an existential threat to other companies. That's what's coming for TensorFlow. Many developers are too naive to realize it, or too short-sighted to care.

tree_of_item · 8 years ago
Huh? They're attempting to dominate the machine learning ecosystem by writing a bunch of free and high quality machine learning libraries? What exactly are they doing wrong?

I wouldn't compare a permissively licensed library to Android services at all.

congerous · 8 years ago
I'm surprised I have to write this, but Google is not a charity. They are pouring commercial resources into Tensorflow for a reason. That reason is Google Cloud. Tensorflow is a Trojan horse to get people to use Google Cloud and other paid Google products. How do I know this? Because Tensorflow works better on Google Cloud than anywhere else, and Google is making a concerted effort to catch up with AWS in cloud, mostly through machine learning.

I didn't compare Tensorflow to Android services. I said that Tensorflow would serve as the basis of a service bundle, much like Android did. Let's come back in a couple years and I'll tell you I told you so.

sandGorgon · 8 years ago
Hey guys, if I could request... please fix the serialization story for TensorFlow. There are six googleable methods to export from TensorFlow, and nobody knows what will work on the cloud, what can be exported from Cloud ML, and what can be loaded on Android.

It has to be consistent and there has to be one way to do it.

I personally have a 10-message thread with Google Cloud support about exporting a Cloud-trained model to TensorFlow, and nobody could figure it out [Case #13619720].

alextp · 8 years ago
Did you try using SavedModel? It should be seamless to use downstream with TensorFlow Serving, and it's not that hard to get Estimators to spit those out.
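
If it helps, a rough end-to-end sketch of getting a SavedModel out of an Estimator (the toy LinearClassifier, paths, shapes, and the "x" feature name below are placeholders for whatever you actually trained):

    import numpy as np
    import tensorflow as tf

    # A toy Estimator standing in for the model you trained on Cloud ML.
    feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
    estimator = tf.estimator.LinearClassifier(feature_columns=feature_columns,
                                              model_dir="/tmp/toy_model")

    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": np.random.rand(16, 4).astype(np.float32)},
        y=np.random.randint(0, 2, size=16),
        batch_size=4, num_epochs=None, shuffle=True)
    estimator.train(train_input_fn, steps=10)

    # The serving signature: which tensors the exported model should accept.
    serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
        {"x": tf.placeholder(tf.float32, shape=[None, 4], name="x")})

    # Writes a timestamped SavedModel directory that TensorFlow Serving
    # (or Cloud ML prediction) can load directly.
    export_dir = estimator.export_savedmodel("/tmp/exported_model", serving_input_fn)
    print(export_dir)
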
sandGorgon · 8 years ago
I really wish. https://github.com/tensorflow/tensorflow/issues/12750

In fact, if you dig up the case, even official support told me that SavedModel needs some freezing using Bazel, otherwise it doesn't work.
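
(For reference, the Bazel-free equivalent of that freezing step, as I understand it, is to fold the checkpoint variables into constants yourself; the paths and the output node name here are placeholders:)

    import tensorflow as tf

    with tf.Session(graph=tf.Graph()) as sess:
        # Rebuild the graph and load the trained weights from the checkpoint.
        saver = tf.train.import_meta_graph("/tmp/model/model.ckpt.meta")
        saver.restore(sess, "/tmp/model/model.ckpt")

        # Convert variables to constants so the GraphDef is self-contained.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names=["softmax"])

        with tf.gfile.GFile("/tmp/model/frozen_graph.pb", "wb") as f:
            f.write(frozen.SerializeToString())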

The GitHub issue tracker and Stack Overflow are full of these. If you can, please take the message to the other side :(

I don't think the cloud guys (where training will happen in distributed mode) talk to the Android guys (where models will be used after quantization). There is a huge serialization problem that all of us are currently struggling with.