rpiguy · 5 years ago
Founded in 2019, Ascender.AI brings together powerful machine learning tools to change the way we interact with and discover information. A flexible ingest engine allows us to work with any data source.

Our goal is to transform search into an interactive experience. Our visualization allows you to drill down and refine results while always preserving the original document. We process videos and tag them using facial recognition, translate documents on ingest, and cluster them by topic.

A touch-driven interface turns data search into true exploration.

The rendering of text in the experience is driven by Named Entity Recognition, part-of-speech tagging, and text summarization.
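The post doesn't say which summarization approach Ascender uses, but for readers curious about the general idea, here is a minimal sketch of frequency-based extractive summarization (sentences scored by how often their content words appear; the stopword list and threshold are illustrative assumptions, not Ascender's actual pipeline):

```python
import re
from collections import Counter

# Illustrative stopword list; a real system would use a full list or a library.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "it", "that"}

def summarize(text: str, n_sentences: int = 2) -> str:
    """Score each sentence by the average corpus frequency of its
    non-stopword tokens, then return the top-n sentences in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    scored = []
    for i, sent in enumerate(sentences):
        tokens = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOPWORDS]
        score = sum(freq[t] for t in tokens) / max(len(tokens), 1)
        scored.append((score, i, sent))
    top = sorted(scored, reverse=True)[:n_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda x: x[1]))
```

In practice NER and POS tagging would come from a trained model (spaCy, a BERT-based tagger, etc.); this sketch only covers the extractive-summary step.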

Documents are clustered by topic using BERT embeddings and laid out in a force-directed graph.
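A rough sketch of the clustering idea, with a toy bag-of-words vector standing in for a real BERT embedding (the greedy threshold scheme here is my illustration, not Ascender's algorithm; real BERT vectors would come from a library like sentence-transformers):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words vector as a stand-in for a BERT embedding."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs: list[str], threshold: float = 0.3) -> list[list[int]]:
    """Greedy topic clustering: each doc joins the first cluster whose
    seed document is similar enough, otherwise it starts a new cluster."""
    vecs = [embed(d) for d in docs]
    clusters: list[list[int]] = []
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(v, vecs[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

The resulting cluster assignments could then feed edge weights in a force-directed layout (e.g. D3's force simulation on the TypeScript side), with similar documents pulled together.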

The engine is written in TypeScript and Python. We use data from several proprietary media monitoring platforms.

News articles and videos are from several sources.

NewsAPI.org provides broad headline news coverage.

Our partner company AppTek captures many Arabic-language broadcast TV news channels from satellite; these are speech-recognized in Arabic, machine-translated into English, and used as multilingual content for our system. We also scrape many text news sources from the web.

jacobn · 5 years ago
Interesting work! AI-enhanced/powered UIs could have a lot of potential. I recently posted polygrid.com and zoomstock.com which are similar in some respects and very different in others, and also got basically crickets in response, so I wanted to give you some feedback that I hope will be useful:

1. I clicked your "Headline News" test drive link and it was all US presidential debate related content. I did see the timeline in the lower right corner, and when I tried a different random search the timeline changed substantially (from covering ~1h to several months). This made me believe that you fetch a small-ish number of items and show those, covering whatever timespan they're from. Great. But with that few items, is there a point to having this whiz-bang interface? I can scan ~10-100 items in a list reasonably quickly. Seems like a more advanced interface like yours would be more useful with a lot more content being shown? I don't know if you could create group-bubbles (or if that would be useful), but maybe?

2. The bubble-to-bubble relationships are unclear. As a developer I get the BERT embeddings and force-directed graph, but as a user all I see is a bunch of randomly distributed bubbles - understanding their relationships requires parsing them in detail, which is at odds with the presentation (= picture + text summarization).

3. I tried the "iran AND rockets" link and got 505 initial results, but clearly many fewer bubbles are shown. I'm extremely comfortable with zooming interfaces, and zoomed around in yours a lot, but I don't see any more bubbles - where are the other items?

4. Bubbles work well with force-directed graphs, but not with text content. Yes, I can click to get the rounded rect, but I can move my eyes a lot faster than my mouse.

5. News is, for better or most likely worse, generally consumed in a "lean back" context (= mindless scrolling on various news feeds). Your interface is very much a "lean forward" experience - it is fundamentally premised on an active viewer taking charge. Boy do I wish we consumed news that way.

6. The videos don't really play when I click on the story. I then have to click the little play button in the YouTube player. When zoomed out, the video is a small, partially obscured thumbnail?

You're trying to do something very challenging. I love the creativity of your solution. Best of luck!

rpiguy · 5 years ago
Thank you, great feedback