1) What do you think about a hybrid approach: hypergraphs + large-scale NLP models (transformers)?
2) How far are we from real self-evolving cognitive architectures with self-awareness features? Is it a matter of years or months, or is it already a solved problem?
Doug Lenat is very much still active in the project. He doesn't do as much work building the ontology, but he plays a role in how various projects develop and provides a lot of feedback.
We're a data science R&D company founded in 2017 with 4 core team members: 2 co-founders (a technical CTO and a non-technical CEO), a data scientist, and a software engineer. We focus on developing large-scale NLP projects with state-of-the-art models. We are located in Kyiv, Ukraine, with sales representatives in Canada (Toronto), the United States (New York), and the UAE (Dubai).
We also have a proprietary product, a biomedical question-answering system, and have received a grant from the US for further R&D.
Last year the company's revenue reached $150k, and we're planning to grow it 5x in 2020.
We're looking for:
- A technical or non-technical co-founder (equity sharing) who has connections and can represent us to Fortune 1000 companies.
- A technical or non-technical co-founder (equity sharing), or a business partner, who has connections with investors in Palo Alto / San Francisco and can represent us there.
- Business partners who are interested in selling our products or services.
3) Does it make sense to use graph embeddings like https://github.com/facebookresearch/PyTorch-BigGraph to achieve better results?
4) Why did Cycorp decide, at some point, to limit communication and collaboration with the scientific community and AI enthusiasts?
5) Have you tried the GLUE / SuperGLUE / SQuAD challenges with your system?
6) Does Douglas Lenat still contribute actively to the project?
Thanks