Q1: Will we have access to the documentation (schematics, layout, etc.)? If yes, how detailed will it be?
A1: The user manual is accessible. Design documents for standard products are generally not shared with customers, but for custom products where the customer pays NRE fees, the design documents are shared.
Q2: How many GPIOs are free for customization?
A2: Currently we have one GPIO port per channel exposed for programming (S4/S16 will have 4/16 GPIOs). However, we can customize based on your need to as much as 8 GPIOs per camera channel and also customize the front panel to expose these GPIOs per your requirement.
Q3: Is the SoC from Intel/Movidius?
A3: The S series uses Intel Movidius; the R-Plus series uses NVIDIA Orin.
Q4: Is 17MP 30FPS raw footage or compressed? If compressed, what encoders do we use?
A4: Capturing raw data from the sensor into the ECFG can run at this speed. The ECFG also supports demosaicing and conversion to YUV, with MJPEG compression when sending the stream out to the host PC.
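For rough capacity planning, the data rates at 17MP/30fps can be estimated as below. The bit depths are assumptions for illustration (12-bit packed RAW, 2-bytes-per-pixel YUV422), not vendor specifications; substitute the actual sensor format:

```python
# Back-of-envelope data rates for a 17MP, 30fps stream.
# Assumed formats (not from the vendor doc): 12-bit packed RAW
# (1.5 bytes/pixel) and YUV422 (2 bytes/pixel).

MEGAPIXELS = 17
FPS = 30
PIXELS_PER_SEC = MEGAPIXELS * 1_000_000 * FPS

raw_mb_s = PIXELS_PER_SEC * 1.5 / 1e6      # 12-bit packed RAW
yuv422_mb_s = PIXELS_PER_SEC * 2 / 1e6     # YUV422

print(f"RAW12: {raw_mb_s:.0f} MB/s, YUV422: {yuv422_mb_s:.0f} MB/s")
```

These figures show why on-device compression (MJPEG) matters when streaming to the host over USB.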
Q5: Is the connection to the ECFG only via USB?
A5: For the S series, only via USB. The R-Plus series can use USB, Ethernet, WiFi, or 4G.
Q6: How customizable is the POC? Is there a way to create scripts to run tests in an automated way?
A6: Our POC interface supports setting the output voltage in the 4V-16V range by calling the ECFG SDK, as well as switching the output voltage ON/OFF. The test case you describe can be implemented with our Python SDK; we have similar reference sample code. The UGrab App also supports long-term aging testing, although it is not currently linked to temperature. We will share the SDK documentation when we provide the evaluation unit.
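A minimal sketch of such an automated test, assuming hypothetical entry points (`set_poc_voltage`, `set_poc_output`) passed in as callables; the real function names come from the ECFG SDK documentation shipped with the evaluation unit:

```python
# Hypothetical automation harness for POC testing. The two callables
# stand in for the real ECFG Python SDK functions, whose actual names
# are defined in the SDK documentation.

POC_MIN_V, POC_MAX_V = 4.0, 16.0  # range stated in the vendor answer

def run_poc_sweep(set_poc_voltage, set_poc_output, voltages):
    """Enable POC, step through each voltage, then switch POC off.

    Returns the list of voltages actually applied; out-of-range
    values raise before anything is driven onto the cable.
    """
    for v in voltages:
        if not POC_MIN_V <= v <= POC_MAX_V:
            raise ValueError(f"{v} V outside POC range {POC_MIN_V}-{POC_MAX_V} V")
    applied = []
    set_poc_output(True)           # POC ON
    try:
        for v in voltages:
            set_poc_voltage(v)     # step the supply
            applied.append(v)
    finally:
        set_poc_output(False)      # always leave POC OFF
    return applied
```

With the real SDK imported, the two callables would simply be the corresponding SDK calls, and pass/fail checks per voltage step could be added inside the loop.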
Q7: Is it possible to deactivate POC?
A7: Yes, it supports switching control of output voltage ON/OFF.
Q8: Do we have reverse voltage protection ?
A8:
1. For the device's input power, yes, we have reverse-polarity protection.
2. The POC and extended GPIOs are output signals, so there is no reverse protection, but these interfaces are designed with overcurrent protection.
Q9: Is it possible to write a register list directly over I2C to the chip, and have a single click in the UGrab SW run it, bypassing the USB protocol and the communication with Windows?
A9: Currently only one configuration list is pre-loaded on the chip, but pre-loading multiple configuration lists is feasible, and we can update the SW so that users can click in UGrab to switch between the configuration lists.
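To illustrate the multiple-list idea, a configuration list can be modeled as a named sequence of (register address, value) pairs; the names, addresses, and values below are purely illustrative, not taken from the vendor:

```python
# Hypothetical pre-loaded configuration lists: each entry is an
# (I2C register address, value) pair. All names and values are
# illustrative only.

CONFIG_LISTS = {
    "default_4k30": [(0x0100, 0x01), (0x0202, 0x1E)],
    "lowres_60fps": [(0x0100, 0x01), (0x0202, 0x3C)],
}

def select_config(name):
    """Return the register list UGrab would ask the chip to apply."""
    try:
        return CONFIG_LISTS[name]
    except KeyError:
        raise ValueError(f"unknown configuration list: {name}") from None
```

A UGrab button per list name would then map a click to one `select_config` call plus the on-chip apply step.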
Q10: What access will we get on the SoC ?
A10: As a general feature, we provide the host-side SDK for customer integration, and the average user does not need to program directly on the SoC. If you do need to program on the SoC, we will need to share the source code and Intel MDK with you under NRE.
Q11: Is it possible to get 30/60fps at a lower resolution (<4K), scaled down by the ECFG?
A11: Yes. After scaling, do you want to store the data locally or preview the output on the host?
At present, the S-series supports demosaicing, conversion to YUV, scaling down, and output to the host over USB.
If your requirements differ from this case, only a software iteration is needed; the hardware performance already supports your demands.
Q12: How does the streaming in UGrab work? Is it raw footage? Is it compressed?
A12: For the S Series, we support two modes of operation:
1. Default mode. The ECFG captures raw data and transmits it to UGrab via USB3.2. UGrab performs the post-processing: demosaicing, conversion to YUV, and display.
2. Optional mode. The ECFG captures raw data, demosaics it on the device, converts it to YUV, and transmits the YUV data to UGrab for display.
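The conversion step in both modes can be sketched per pixel as below, using generic full-range BT.601 coefficients; this is a textbook conversion for illustration, not the exact matrix used inside UGrab or the ECFG firmware:

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB -> YUV for one 8-bit pixel.

    Generic textbook coefficients; the actual pipeline's
    conversion matrix may differ.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(y), clamp(u), clamp(v)
```

In the default mode this math runs on the host after demosaicing; in the optional mode the equivalent step runs on the ECFG device before USB transmission.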
Q13: Does UGrab have a function for screen recording/grabbing footage? We don't necessarily need raw footage to be recorded, but we need it somehow recorded.
A13: Yes, this can be supported.
ECFG-R2 Plus
Eyecloud.ai is the leading supplier of AI vision appliances and systems, aiming to support tech companies in overcoming the development and production challenges of Edge AI vision products with expertise in advanced hardware design and production, camera and machine vision systems development, image sensor and ISP tuning, and IoT device management. Eyecloud.ai has successfully developed mass-production machine vision solutions for customers around the globe in the autonomous driving, electric vehicle, mobility robot, and surveillance markets. Eyecloud.ai offers engineering services to enable customization and meet unique application requirements in a rapid and cost-effective manner. Founded in 2018, Eyecloud.ai has received several industry awards for its insight and innovations in AI vision product deployment.
Our website: https://www.eyecloudai.com/