“Successful” is a broad term, but we have tens of thousands of apps on our platform and our SDK runs on over a billion devices. We did YC and raised multiple rounds afterward.
The confusion on the second point hinges on “most likely”. You’re likely interpreting that as the expectation of the resolution time, whereas they are using maximum likelihood estimation. MLE is rather useless in this case, but it is technically still correct.
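A toy sketch of that distinction, assuming resolution times follow something right-skewed like a log-normal (the parameters below are invented for illustration): the single most likely value, i.e. the mode, sits well below the expectation, which is why the two readings give such different numbers.

```kotlin
import kotlin.math.exp

// Illustration only: for a right-skewed log-normal, the mode ("most likely" value)
// and the mean (expectation) are far apart. Parameters are made up.
fun main() {
    val mu = 0.0     // mean of the underlying normal, on the log scale
    val sigma = 1.0  // std dev of the underlying normal, on the log scale

    val mode = exp(mu - sigma * sigma)       // most likely single value ≈ 0.37
    val mean = exp(mu + sigma * sigma / 2)   // expected value ≈ 1.65

    println("mode (most likely) = %.2f, mean (expected) = %.2f".format(mode, mean))
}
```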
Here is a quick summary of what we launched:
1. UI Hangs: track the performance of each ViewController, Activity, and Fragment in your app.
2. Network Performance: we record the response time of all your network calls as seen by users, and show you the full round-trip with both client-side and server-side errors.
3. Execution Traces: you can define your own traces to track the performance of any logic in your app that can be a bottleneck to your users’ experience (see the sketch after this list).
4. Apdex Scores: a single metric that represents your overall app quality as perceived by your users, with a breakdown of satisfying, tolerable, frustrating, and crashing sessions (see the Apdex example after this list).
5. App Launch: see how long your users are waiting from the moment they open the app until the app is fully launched and accepting touch events, across devices and OS versions.
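On item 3: the sketch below is purely hypothetical and is not the SDK’s actual API; the names (Trace, setAttribute, end) are placeholders. It only illustrates the idea of wrapping a potentially slow piece of app logic in a named, timed trace.

```kotlin
// Hypothetical sketch -- placeholder names, not the real SDK calls.
// Wraps a piece of app logic in a named trace and measures how long it took.
class Trace(private val name: String) {
    private val start = System.nanoTime()
    private val attributes = mutableMapOf<String, String>()

    fun setAttribute(key: String, value: String) { attributes[key] = value }

    fun end() {
        val durationMs = (System.nanoTime() - start) / 1_000_000
        // A real SDK would report this to a backend; here we just print it.
        println("trace '$name' took ${durationMs}ms, attributes=$attributes")
    }
}

fun main() {
    val trace = Trace("load_user_feed")  // name the bottleneck you care about
    Thread.sleep(120)                    // stand-in for the actual work
    trace.setAttribute("cache", "miss")
    trace.end()
}
```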
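On item 4: the standard Apdex formula counts satisfied sessions fully and tolerable sessions at half weight, divided by the total. Below is a minimal sketch, assuming crashing sessions simply count for zero (an assumption made for this example, not necessarily how the product buckets them); the session counts are invented.

```kotlin
// Minimal Apdex sketch: satisfied counts fully, tolerable counts half,
// frustrating (and, in this sketch, crashing) sessions count zero.
fun apdex(satisfied: Int, tolerable: Int, frustrating: Int, crashing: Int): Double {
    val total = satisfied + tolerable + frustrating + crashing
    if (total == 0) return 1.0  // no sessions: treat as a perfect score by convention
    return (satisfied + tolerable / 2.0) / total
}

fun main() {
    // e.g. 700 satisfied, 200 tolerable, 80 frustrating, 20 crashing sessions
    println(apdex(700, 200, 80, 20))  // 0.8
}
```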
Building this was a bit tricky for us. APM is the most data-heavy and complex product we’ve built (especially since we had to switch to working remotely overnight, like most of the world). And since APM is all about performance, we had to keep the footprint of our SDK as small as possible, and, more importantly, to do so without sacrificing accuracy or taking shortcuts.
We’d love your feedback, and please let me know if you have any questions.