SingularityNET AGIX: Live Stream on YouTube
SingularityNET will host a two-part mini-AMA series on YouTube on May 7 at 5 pm UTC. The series covers the latest advances in building a unified experiential learning component for OpenCog Hyperon, the framework for human-level Artificial General Intelligence (AGI) and beyond.
The first session will cover implementing the Non-Axiomatic Reasoning System (NARS) in OpenCog Hyperon's MeTTa language and integrating the AIRIS causality-based learning AI into Hyperon. The second session will focus on recreating experiential learning in Hyperon using the Rational OpenCog Controlled Agent (ROCCA) and on porting the components ROCCA requires from OpenCog classic to Hyperon.
What is an AMA?
An AMA ("ask me anything") is a common informal interactive online meeting where participants can freely put questions to the guests and get answers in real time.
Session 1
- The implementation of NARS (Non-Axiomatic Reasoning System) in OpenCog Hyperon’s MeTTa language for cognitive computations;
- Integrating the AIRIS (Autonomous Intelligent Reinforcement Interpreted Symbolism) causality-based learning AI into Hyperon.
Session 2
- Recreating experiential learning in Hyperon using ROCCA (Rational OpenCog Controlled Agent);
- Porting the fundamental components that ROCCA requires from OpenCog classic to Hyperon, including forward and backward chaining, PLN (Probabilistic Logic Networks), and pattern mining.
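Forward and backward chaining, mentioned above, can be illustrated with a minimal sketch. This is plain Python over propositional Horn rules, not MeTTa, and the facts and rules are hypothetical examples; real Hyperon chaining operates over MeTTa atoms with PLN truth values:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known, until fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def backward_chain(goal, facts, rules):
    """Prove a goal by recursively proving the premises of a rule that concludes it."""
    if goal in facts:
        return True
    return any(
        conclusion == goal and all(backward_chain(p, facts, rules) for p in premises)
        for premises, conclusion in rules
    )

# Hypothetical toy knowledge base for illustration only.
facts = {"agent_at_door", "door_unlocked"}
rules = [
    ({"agent_at_door", "door_unlocked"}, "can_open_door"),
    ({"can_open_door"}, "can_enter_room"),
]

print(forward_chain(facts, rules))                      # set includes "can_enter_room"
print(backward_chain("can_enter_room", facts, rules))   # True
```

Forward chaining works data-driven from known facts toward all consequences; backward chaining works goal-driven from a query back toward supporting facts. The Hyperon port generalizes both to first-order, uncertain reasoning.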
These advancements are part of our ongoing initiative to consolidate the strengths of several systems (ROCCA, NARS, OpenPsi, and AIRIS) to create a unified experiential learning component for Hyperon. This approach will allow AI models to:
- Develop a goal-independent understanding of their environment through causal knowledge gained from planned and spontaneous interactions;
- Explore their environment with increased efficiency using a curiosity model that prioritizes situations with high uncertainty, challenging their existing causal knowledge.
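One way to read the curiosity model described above is as an agent that prefers actions whose outcomes its current causal knowledge predicts least reliably. The following is a minimal sketch under that assumption; the class, names, and the frequency-based uncertainty measure are illustrative choices, not the actual AIRIS/ROCCA implementation:

```python
from collections import defaultdict

class CuriousAgent:
    """Toy curiosity-driven explorer: seek out high-uncertainty situations."""

    def __init__(self, actions):
        self.actions = actions
        # Causal knowledge: (state, action) -> observed outcome counts.
        self.model = defaultdict(lambda: defaultdict(int))

    def uncertainty(self, state, action):
        outcomes = self.model[(state, action)]
        total = sum(outcomes.values())
        if total == 0:
            return 1.0  # never tried: maximally uncertain
        # 1 minus the frequency of the most common outcome:
        # near 0 when the model's prediction is reliable.
        return 1.0 - max(outcomes.values()) / total

    def choose(self, state):
        # Prioritize the action that most challenges existing causal knowledge.
        return max(self.actions, key=lambda a: self.uncertainty(state, a))

    def observe(self, state, action, outcome):
        self.model[(state, action)][outcome] += 1

agent = CuriousAgent(["go_left", "go_right"])
# After a few consistent observations, "go_left" becomes predictable...
for _ in range(3):
    agent.observe("start", "go_left", "room_A")
# ...so curiosity steers the agent toward the untried "go_right".
print(agent.choose("start"))  # go_right
```

Because the exploration signal comes from the agent's own causal model rather than an external reward, the knowledge it gathers is goal-independent, which is part of why such approaches can be far more data-efficient than reward-driven exploration.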
Our preliminary findings indicate that this approach surpasses common Reinforcement Learning techniques in terms of data efficiency by orders of magnitude.
To learn more, set your reminder for the livestream now on your preferred platform:
- YouTube: https://t.co/66Xm1SpDC8
- LinkedIn: https://t.co/JEwoyT7Z9R
- X: SingularityNET