- Bloated build system that is near impossible to get working - who even uses Maven?! PyTorch/Caffe are super simple to build in comparison; with Chainer, it's even simpler: all you need is `pip install` (even on exotic ARM devices).
- The benefits of all that static analysis simply aren't there. On top of that, PyTorch has a JIT compiler, which, one can argue, lets you have your cake and eat it too.
- Loops are extremely limited. Okay, we know RNNs/LSTMs aren't really TF's thing, but if you venture out to do something out of the ordinary, even making it batch-size invariant is difficult. There isn't even a map-reduce op that works without knowing the dimension at compile time. You can hack something together by fooling one of those low-level `while_loop` ops, but that just tells you how silly the whole thing is.
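For the curious, the `while_loop` hack referred to above looks roughly like this. This is a minimal sketch, assuming TF 2.x with eager execution; `dynamic_sum` is a hypothetical name, not anything from the TF API. The point is that you have to hand-roll the loop counter, condition, and accumulator yourself just to reduce over an axis whose size isn't known statically:

```python
import tensorflow as tf

def dynamic_sum(x):
    """Sum a tensor over its first axis without knowing that axis's
    static size, via the low-level tf.while_loop op."""
    i0 = tf.constant(0)
    acc0 = tf.zeros_like(x[0])
    # cond/body take the loop variables (counter, accumulator) and
    # must be threaded through by hand on every iteration.
    cond = lambda i, acc: i < tf.shape(x)[0]
    body = lambda i, acc: (i + 1, acc + x[i])
    _, total = tf.while_loop(cond, body, [i0, acc0])
    return total
```

Compare that boilerplate to a plain Python `for` loop over the same tensor, which is all you'd write in an eager framework.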
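Since the list leans on PyTorch's JIT as the "cake and eat it too" option, here's a minimal sketch of what that means in practice, using the public `torch.jit.script` decorator (`running_sum` is a made-up example function): ordinary Python control flow, including a data-dependent loop, gets compiled into a static graph without being rewritten into special loop ops.

```python
import torch

@torch.jit.script
def running_sum(x: torch.Tensor) -> torch.Tensor:
    # A plain Python for-loop over a runtime-dependent length;
    # torch.jit.script compiles it into TorchScript as-is.
    total = torch.zeros(1)
    for i in range(x.size(0)):
        total = total + x[i]
    return total
```

So you write eager-style code and still get a serializable, statically analyzable graph out the other end, which is the "have your cake" part of the argument.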