I’m guessing some people already have these capabilities integrated into terminal workflows and I’d love to see a demo/setup.
The founding of Netscape occurred at the same time I was deciding where to go in industry when I left Berkeley in 1994. Jim Clark and Marc Andreessen approached me about the possibility of my joining Netscape as a founder, but I eventually decided against it (they hadn't yet decided to do Web stuff when I talked with them). This is one of the biggest "what if" moments of my career. If I had gone to Netscape, I think there's a good chance that Tcl would have become the browser language instead of JavaScript and the world would be a different place! However, in retrospect I'm not sure that Tcl would actually be a better language for the Web than JavaScript, so maybe the right thing happened.
Too humble, Dr. Ousterhout! It would have been a far better language.
Getting to 7 is notoriously difficult
What am I missing?

2 * 2 * 2 - 2/2
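For what it's worth, standard operator precedence does get there with five 2s; a quick check in Python:

```python
# Multiplication and division bind tighter than subtraction,
# so this parses as (2 * 2 * 2) - (2 / 2) = 8 - 1 = 7.
result = 2 * 2 * 2 - 2 / 2
print(result)  # 7.0 (true division yields a float)
```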
Seriously, have you ever heard them say anything bad about Elon, Trump, or the Republican agenda even one time in the last year? We know that none of those entities are perfect, they know that none of those entities are perfect, and yet they dare not say it. This is the same as having politicians that are bought, and it's maddeningly frustrating how two-faced the whole ordeal is. The hosts will look you directly in the eyes and tell you they're being honest, that they just want to know the truth like you do. But the lie is hiding in plain sight, and everyone sees it.
What's the point of it all? They're narcissistically using the show to further their own interests. Especially Sacks, for whom it paid off handsomely. And they do this at the direct expense of their audience, many of whom may not know better, doing their part to build a more morally confused and bankrupt nation in the process.
I still tune in every week, happily outraging myself at the convolutions, lies, and insane double standards they find themselves in in each conversation. I don't know why I do; I guess there's no other show I find that discusses politics and tech in a way that's satisfying. I wish a moderate (or even liberal) panel existed to counterbalance all of their BS, but I haven't found one.
https://docs.google.com/document/d/1iUscaSy6HHLVz2e2rjQfQtx5...
There are some good reasons why global-consensus cryptocurrencies can't generally be used as money:
https://perry.kundert.ca/range/finance/holochain-consistency...
A basket of basic, thickly traded commodities should be chosen; perhaps a basket of specific amounts of basic elements, thermal and electrical energy, and basic food commodities, priced as delivered to several large markets.
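As a toy illustration of the basket idea (the specific commodities, amounts, and prices here are entirely made up):

```python
# Hypothetical basket defining one currency unit. The commodities,
# amounts, and reference prices below are illustrative only; a real
# design would use delivered prices averaged across several markets.
basket = {
    # commodity: (amount per currency unit, price per unit in USD)
    "aluminum_kg":     (1.0,  2.40),
    "wheat_kg":        (5.0,  0.25),
    "electricity_kWh": (10.0, 0.12),
}

# Value of one currency unit = sum of amount * delivered price.
unit_value = sum(amount * price for amount, price in basket.values())
print(f"{unit_value:.2f}")  # 4.85
```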
A control algorithm, such as the PID loop used in process control or robotics, is employed to adjust K over time.
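A minimal sketch of that control idea, assuming K is a supply coefficient steered so the currency's basket price tracks a target; the gains and the toy "market response" are purely illustrative, not part of the original proposal:

```python
# Toy PID loop adjusting a supply coefficient K so that the observed
# basket price of one currency unit tracks a target. The gains and
# the market model (price = base / K) are purely illustrative.
def make_pid(kp, ki, kd):
    state = {"integral": 0.0, "prev_error": None}
    def step(error, dt=1.0):
        state["integral"] += error * dt
        deriv = 0.0
        if state["prev_error"] is not None:
            deriv = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * deriv
    return step

target_price = 1.00          # desired basket price per currency unit
K = 1.0                      # supply coefficient under PID control
price = 1.20                 # initial observed price (overvalued)
pid = make_pid(kp=0.5, ki=0.1, kd=0.05)

for _ in range(100):
    error = price - target_price
    K += pid(error)          # positive error -> expand supply
    price = 1.20 / K         # toy model: more supply lowers price

print(round(K, 3), round(price, 3))
```

The point of the sketch is the feedback structure, not the numbers: the controller only sees the price error, and all the stability concerns raised below apply to choosing the gains.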
At a minimum, those aspects of this currency would need to be flexible such that they can be adjusted over time, as needed, to maintain the currency's stability.
An inflexible scheme seems like it would be doomed to failure. And yet, any tinkering could also be its demise, undermining its stability and faith in the system. It's a delicate balance.

Ultimately, money is a social construct based merely on shared belief. Algorithms can be used to enhance and support this social construct, but I do not see how they could wholly replace human/social interventions.
We've maintained a financial exchange w/ margining for 8 years with it, and I guarantee you that everyone, customers and employees alike, was more than relieved once we were able to lift and shift the whole thing to Java.
The readability and scalability are abysmal as soon as you move on from a quant-desk scenario (which, everyone agrees, it is more than amazing at; pandas and Dask dataframes feel like kindergarten toys in comparison). The disaster recovery options are basically bound to distributed storage, which is by the way "too slow" for any real KDB application, given the whole KDB concept marries storage and compute in a single thread. And use cases involving historical data, such as the one mentioned in the article, very quickly become awful: one kdb process handles one request at a time, so you end up having to deploy and maintain hundreds of RDBs keeping the last hour in memory, HDBs with the actual historical data pausing for hourly write-downs, mirroring trees replicating the data over IPC-over-TCP from the matching engine down to the RDBs/HDBs, and recon jobs to verify the data is consistent across all the hosts.

Not to mention that such a TCP-IPC distribution tree of single-threaded applications means that any single replica stuck down the line (e.g. a big query, or too slow to restart) will typically lead to a complete lockup, all the way up to the matching engine. So then you need to start writing logic for circuit breakers to trip both the distribution and the querying (nothing out of the box). And at some point you need to start implementing custom sharding mechanisms for both distribution and querying (nothing out of the box, once again!) across the hundreds of processes and dozens of servers (which has implications for the circuit breakers), because replicating the whole KDB dataset across dozens of servers (to scale the requests/sec you can actually serve in a reasonable timeframe) gets absolutely batshit-crazy expensive.
And this is the architecture as designed and recommended by the KX consultants that you end up having to hire to "scale" to service nothing but a few billion dollars in daily leveraged trades.
Everything we have is now in Java: all financial/mathematical logic ported over 1:1 with no changes in data schema (neither in-house nor for customers). It uses disruptors and convenient Chronicle/Aeron queues that we can replay anytime (recovery, certifying, troubleshooting, rollback, benchmarks, etc.), plus infinitely scalable, sharded S3/Trino/ScyllaDB for historical data. Performance is orders of magnitude up (despite the thousands of hours micro-optimizing the KDB stack plus the millions in KX consultants, and without any real Java optimizations), incidents became essentially non-existent overnight, and the payroll + infra bills were also divided by a very meaningful factor :]
And this is the architecture as designed and recommended by the KX consultants that you end up having to hire to “scale”
I think this hits on one of the major shortcomings of how FD/Kx have managed the technology going back 15+ years, IMHO.

Historically it's the consultants that brought in a lot of income, with each one building ad-hoc solutions for their clients and solving much more complicated enterprise-scale integration and resilience challenges. FD/Kx failed to identify the massive opportunity here, which was to truly invest in R&D and develop a set of common IP, based on robust architectures, libraries and solutions around the core kdb+ product, that would be vastly more valuable and appealing to more customers. This could have led to a path where open sourcing kdb+ made sense, if they had a suite of valuable, complementary functionality that they could sell. But instead, they parked their consultants for countless billable hours at their biggest paying customers' sites and helped them build custom infra around kdb+, reinventing wheels over and over again.
They were in a unique position for decades, with a front row seat to the pain points and challenges of top financial institutions, and somehow never produced a product that came close to the value and utility of kdb+, even though clearly it was only ever going to be a part of a larger software solution.
In fairness, they produced the Delta suite, but its focus and feature set seemed to be constantly in flux and underwhelming, trying to bury and hide kdb+ behind frustratingly pointless UI layers. The more recent attempts with Kx.ai I'm less familiar with, but they seem to be a desperate marketing attempt to latch onto the next tech wave.
They have had some very talented technical staff over the years, including many of their consultants. I just think that if the leadership had embraced the core technology and understood the opportunity to build a valuable ecosystem, with a goal towards FOSS, things could look very different. All hindsight of course :)
Maybe it’s not too late to try that…
Get a free version out there that can be used for many things…
I think this has been the biggest impediment to kdb+ gaining recognition as a great technology/product and growing amongst the developer community.

Having used kdb+ extensively in the finance world for years, I became a convert and a fan. There's an elegance in its design and simplicity that seems very much rooted in the Unix philosophy. After I left finance, and no longer worked at a company that used kdb+, I often felt the urge to reach for kdb+ for little projects here and there. It was frustrating that I couldn't use it anymore, or even just show colleagues this little-known/niche tool and geek out a little on how simple and efficient it was for doing certain tasks/computations.