I wish you good luck and all the best. It's a tough field but a big market. And I still think the potential is there.
The observation that people want what others want is not new and doesn't take philosophical genius to make -- "keeping up with the Joneses" is what normal people call it.
What about "avoiding competition is good, so chase blue oceans"? That's a decent fund thesis, sure. Is it a genius one? The returns certainly seem to come from the application of the maxim rather than the maxim itself.
The real insight would've been to propose a fun way out of this "mimetic hell" for society. Girard's observation is that this usually takes violence against a scapegoat -- certainly not a fun way out.
"This paper presents a new type of photonic accelerator based on coherent detection that is scalable to large (N ≳ 10^6) networks and can be operated at high (gigahertz) speeds and very low (subattojoule) energies per multiply and accumulate (MAC), using the massive spatial multiplexing enabled by standard free-space optical components"
However, it still presents an interesting case for the fact that the fundamental floor on optical scaling is absolutely tiny. It'll be interesting to see who wins in this space :)
In any case, I have yet to see a conclusive, publicly explained solution to the significant system-level problems with memristor-based neural architectures, or indeed any analog neural architecture.

The best claimed digital architectures sit around ~250 fJ per multiply-and-accumulate (MAC) [Groq], and these generally involve 8-bit multiplication, which is extremely expensive in the analog domain thanks to the exponential scaling of power with precision. Even if you set aside the monstrous fabrication and device-level variance issues with memristors, DACs and ADCs consume tens of pJ per sample in the realistic IP blocks that are commercially available. Although only one DAC/ADC pair is required per dot product, that still works out to roughly 40 fJ per MAC from conversion alone, assuming a 256x256 matrix multiplication and not taking other system-level issues into account.

This caps memristors at about a 5x advantage over current digital architectures, and as nodes shrink, by the time memristors actually ship, it will be closer to 3x. While a 3x is considerable, I don't think it justifies the moonshot-level deep-tech risk that memristors will continue to represent. Many hardware companies [Tabula...] have failed chasing something like a 3x in their main figure of merit, only to find that system-level issues leave them at 1x. Besides, I'm sure digital architectures have more than 3x of headroom left -- plenty of tricks left for digital!
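The back-of-envelope arithmetic above can be sketched in a few lines. Rough sketch only: the ~10 pJ combined DAC+ADC figure and the 256-long dot product are illustrative assumptions taken from the ballpark numbers in this comment, not measured values.

```python
# Back-of-envelope energy-per-MAC estimate for an analog (memristor) accelerator.
# Assumed figures, taken from the rough numbers above -- not measurements:
dac_adc_energy_pj = 10.0   # combined DAC + ADC energy per conversion ("tens of pJ")
dot_length = 256           # one conversion pair amortized over a 256-long dot product

# Conversion energy amortized per multiply-and-accumulate, in femtojoules.
conversion_fj_per_mac = dac_adc_energy_pj * 1000 / dot_length
print(conversion_fj_per_mac)  # ~39 fJ/MAC from conversion alone

# Compare against the claimed ~250 fJ/MAC digital baseline.
digital_fj_per_mac = 250.0
print(digital_fj_per_mac / conversion_fj_per_mac)  # ~6.4x upper bound
```

Adding the memristor array's own compute energy on top of this conversion floor is what pulls the achievable advantage down toward the ~5x figure.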
I'm hoping for a breakthrough, because I am fundamentally an optimist, but memristors have been failing to deliver since 2008.
[1] http://www.cs.utah.edu/~rajeev/pubs/isca16-old.pdf [2] https://ieeexplore.ieee.org/document/7010034/
In the first category (empirical evidence),
- The discrete leap from non-LSTM RNN to LSTM network performance on NLP was essentially due to a "better factoring of the problem": breaking out the primitive operations that equate to an RNN having "memory" had a substantial effect on how well it "remembered."
- The leap in NMT from LSTM seq2seq to attention-based methods (the Transformer by Google) is another example. Long-distance correlations made yet another leap because they are simply modeled more directly by the architecture than in the LSTM.
- The relation network by DeepMind is another excellent example: a drop-in replacement motivated by pure architectural intuition that increased accuracy from the 66% range to the 90% range on various tasks. Again, this was achieved by directly modeling and weight-tying relation vectors in the architecture of the network.
- The capsule network for image recognition is yet another example. By shifting the architecture's focus from arbitrarily guaranteeing only positional invariance to guaranteeing other sorts as well, the network was able to do much better at overlapping MNIST. Again, a better factoring of the problem.
These developments all illustrate that picking the architecture and the numerical guarantees baked into the "factoring" of the architecture (for example, weight tying, orthogonality, invariance, etc.) can have and has had a profound effect on performance. There is no reason to believe this trend won't continue.
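As a toy illustration of what baking a guarantee like weight tying into the architecture buys you, here is a parameter-count comparison between a dense layer and a weight-tied (convolutional) one. The sizes are made up purely for illustration:

```python
# Toy parameter-count comparison: a fully connected layer vs. a weight-tied
# convolutional layer over the same 32x32 input (hypothetical sizes).
H = W = 32   # input grid
K = 3        # 3x3 kernel shared across all positions (weight tying)

dense_params = (H * W) ** 2   # one independent weight per input-output pair
tied_params = K * K           # the same 9 weights reused at every position

print(dense_params, tied_params)  # 1048576 vs 9
```

Translation invariance is exactly this kind of numerical guarantee: tying the weights forces the layer to respond the same way wherever a pattern appears, instead of having to relearn it at every position.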
In fact, there are some very interesting ways to think about the principles behind network structure -- I can't say for sure that it has any predictive power yet, but types are one intuitively appealing way to look at it: http://colah.github.io/posts/2015-09-NN-Types-FP/
Excel's dominance in the field comes from its being an _application container_ that _non-dev_ people can use.
The workflow is this:
- old trader guy says to his junior guy: "hey, can you look into xxx?"
- junior trader guy says: "sure, I'll make a spreadsheet for it"
- old trader guy: "great, your model is all I need, let's trade"
- several weeks later, IT guy says: "hey, you're running a $100m book out of a spreadsheet; we'll make you a nice system for it, because your stuff will blow up."
- several months later, the IT guy comes back with a web app that does the same thing as the spreadsheet.
- old trader guy says: "hey, I can't copy shit around, my shortcuts aren't working, I need to be able to do basic maths on the side, I can't save my work, etc."
- IT guy: "ok, I'll make you an export-to-Excel button"
Seriously I've seen this happen over and over again.
The issue is not how to get rid of Excel; it's how to make a better spreadsheet...
Shameless plug: I am a founder of AlphaSheets, a company working on solving all of these issues. It's quite scary (building a spreadsheet is like boiling an ocean) but our mission feels very meaningful, we're well-funded, and we are now stable and serving real users.
A big problem in finance workflows is the tradeoff between several factors: correctness, adoption / ease-of-use, rapid prototyping, and power. We aim to address several of these. We've built a real-time collaborative, browser-based spreadsheet from the ground up that supports Python, R, and SQL in addition to Excel expressions.
Correctness is substantially addressed, because you don't need to use VLOOKUP or mutative VBA macros anymore. Your data comes in live, and you can reference tables in Python as opposed to individual cells. A lot of operational risk goes away as well, because the AlphaSheets server is a single source of truth.
We help with adoption of Python and adoption of correct systems as well. You can gradually move to Python in AlphaSheets -- many firms are trying to make a "Python push" and haven't succeeded yet because the only option is to move to Jupyter and that's too much of a disruption. It's less brittle than Excel. The important keyboard shortcuts are there.
And finally, the entire Python ecosystem of tools (pandas, numpy, etc.) and all of R is available, meaning that many pieces of functionality that had to be painstakingly built in-house in VBA and pasted around are simply available out of the box in well-maintained, battle-tested packages.
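To make the VLOOKUP-to-tables shift concrete, here's a minimal pandas sketch. The blotter and reference data are hypothetical, invented purely for illustration:

```python
import pandas as pd

# Hypothetical trade blotter and security master -- in Excel this join would
# typically be a whole column of per-cell VLOOKUP formulas.
trades = pd.DataFrame({"ticker": ["AAPL", "MSFT", "AAPL"], "qty": [100, 50, -30]})
ref = pd.DataFrame({"ticker": ["AAPL", "MSFT"], "price": [150.0, 250.0]})

# One declarative join over whole tables instead of per-cell lookups.
enriched = trades.merge(ref, on="ticker", how="left")
enriched["notional"] = enriched["qty"] * enriched["price"]
print(enriched["notional"].sum())  # 100*150 + 50*250 - 30*150 = 23000.0
```

Because the operation names whole tables rather than cell ranges, it keeps working when rows are added -- one of the classic ways VLOOKUP-based sheets silently break.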
Our long term plan is to broaden our focus into other situations in which organizations are outgrowing their spreadsheets. We think there's a lot of potential with the spreadsheet interface but the Excel monopoly has prevented meaningful innovation from happening. For example, every BI solution tries to be "self-serve" and "intuitive" these days, but encounters resistance from users who end up sticking with spreadsheets due to their infinite flexibility and immediate familiar appeal.
We hope to bring the spreadsheet in line with the realities of the requirements of the modern data world -- big data, tabular data, the necessity of data cleaning, data prep / ETL, the availability of advanced tooling (stats, ML), better charting -- because we think there's a giant market of people waiting to move to a modernized but familiar spreadsheet.
If anyone's interested, get in touch -- I'd love to chat! I'm michael at alphasheets dot com :)
Would love to share notes if you're up for it!