We work on Deep Learning software and hardware. Our goal is to replicate the human brain in algorithms and computing devices. Our focus is on the creation of an artificial scientist using multi-modal large world models (LWM). #LLM #AI #DeepLearning #MachineLearning
Research questions: “How do we teach machines to understand and predict the 3D world we live in?”, “How do we encode and learn multi-modal data?”, “What artificial brain can support intelligent behavior in a physical environment?”, “What hardware can we use to accelerate machine learning?”
Research philosophy: We specialize in applying computing technologies (deep learning, machine learning, artificial intelligence, microchips, and systems of computing devices and sensors) to extend scientific exploration and measurement toward understanding life and replicating it in engineered systems.
We are pioneers in deep learning and neural networks, with more than 20 years of experience. We have worked on convolutional neural networks for vision; on LSTMs and Transformers for vision, speech, text, and NLP; on reinforcement learning for robotics; and on AI for 3D and graphics, to name a few.
We are de facto leaders in hardware processors and accelerators for deep learning. We pioneered the design and development of five generations of deep learning processors and accelerators from 2004 to the present.
We have worked for more than 20 years on the design, fabrication, testing, and characterization of silicon devices, circuits, and systems for biomedical applications, neuromorphic engineering, and silicon-on-insulator and silicon-on-sapphire technologies.
Chapter: "Large-Scale FPGA-Based Convolutional Networks," in "Scaling Up Machine Learning," Cambridge University Press, 2011, edited by Ron Bekkerman, Misha Bilenko, and John Langford; chapter authors: C. Farabet, Y. LeCun, K. Kavukcuoglu, B. Martini, P. Akselrod, S. Talay, and E. Culurciello.
Book: "Silicon-on-Sapphire Circuits and Systems: Sensor and Biosensor Interfaces," E. Culurciello, McGraw-Hill, 2009.