The Kinara Ara-1 AI processor, with its patented Polymorphic Dataflow Architecture, enables applications to run multiple AI models with zero-overhead context switching while minimizing data traffic and significantly reducing power consumption. The processor delivers a significant advantage in performance/$ and performance/Watt over general-purpose GPUs, making it ideal for applications in Smart Cities, Smart Retail, Automotive, and Industry 4.0.
Unrivaled Software Tools
The Kinara Software Development Kit (SDK) combined with Ara-1 provides unmatched ease of use, flexibility, and insight for deploying AI at the edge. Deep Vision provides Linux and Windows drivers to support runtime communication between most Linux- or Windows-based host systems and Ara-1.
Increase Performance by Offloading Inferencing
While the host system performs all pre- and post-processing functions, the Ara-1 Edge AI Processor handles the application's end-to-end inferencing requirements. It is optimized for a batch size of '1', delivering real-time responsiveness and low latency. The Ara-1 can also run multiple models simultaneously without sacrificing performance. The Ara-1 processor scales from endpoints to edge servers: connect multiple Ara-1 processors to a host for a linear increase in AI performance.
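The offload pattern described above can be sketched as a minimal host-side pipeline: the host pre-processes a frame, submits a batch-of-1 tensor to the accelerator for inference, and post-processes the result. The class and method names below (`StubAccelerator`, `load_model`, `infer`) are illustrative stand-ins, not the actual Kinara SDK API; a software stub takes the place of the device.

```python
# Hypothetical host-side offload pipeline. The accelerator is replaced by a
# stub so the sketch is self-contained; names here are NOT the Kinara SDK API.
import numpy as np

class StubAccelerator:
    """Stand-in for an Ara-1 device handle; real inference would run on-chip."""
    def __init__(self):
        self.models = {}

    def load_model(self, name, fn):
        # Several models can be resident at once; the real chip switches
        # between them with zero-overhead context switching.
        self.models[name] = fn

    def infer(self, name, tensor):
        # Optimized for a batch size of 1: one frame in, one result out.
        assert tensor.shape[0] == 1
        return self.models[name](tensor)

def preprocess(frame):
    # Host-side pre-processing: normalize and add the batch dimension.
    return (frame.astype(np.float32) / 255.0)[np.newaxis, ...]

def postprocess(logits):
    # Host-side post-processing: pick the top-scoring class.
    return int(np.argmax(logits))

dev = StubAccelerator()
# Dummy "model": scores each channel by its mean intensity.
dev.load_model("classifier", lambda t: t.mean(axis=(1, 2)))

frame = np.zeros((224, 224, 3), dtype=np.uint8)
frame[..., 2] = 255  # blue-heavy test frame
result = postprocess(dev.infer("classifier", preprocess(frame)))
print(result)  # the dummy model picks the channel with the highest mean -> 2
```

Because the host only handles pre- and post-processing, the same loop could submit frames to several loaded models, or round-robin across multiple attached devices for linear scaling.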