# K230 nncase FAQ
## 1. Error Installing whl Package

### 1.1 `xxx.whl is not a supported wheel on this platform.`

A: Upgrade pip:

```shell
pip install --upgrade pip
```
## 2. Error Compiling Model

### 2.1 `System.NotSupportedException: Not Supported *** op: XXX`

A: This exception indicates that the `XXX` operator is not yet supported. You can submit a feature request in the nncase GitHub Issues. For a list of supported operators, please refer to the `***_ops.md` documents in the nncase repository. If `XXX` is `FAKE_QUANT`, `DEQUANTIZE`, `QUANTIZE`, or similar, the current model is a quantized model, which nncase does not currently support; please compile the `kmodel` from a floating-point model instead.
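As a quick pre-compilation check, you can scan a model's operator list for the quantization-related ops named above. A minimal sketch in plain Python (the helper and the example op list are illustrative, not part of the nncase API):

```python
# Minimal sketch: flag quantization-related ops in a model's operator list.
# The op names below are the ones this FAQ mentions; the helper itself is
# illustrative, not part of the nncase API.
QUANT_OPS = {"FAKE_QUANT", "DEQUANTIZE", "QUANTIZE"}

def find_quant_ops(op_types):
    """Return the quantization-related ops found in an iterable of op names."""
    return sorted(QUANT_OPS & {op.upper() for op in op_types})

# Example: op types as you might collect them from a model graph,
# e.g. {node.op_type for node in onnx.load("model.onnx").graph.node}
ops = ["Conv", "Relu", "QUANTIZE", "DEQUANTIZE"]
print(find_quant_ops(ops))  # a non-empty result means a quantized model
```

If the result is non-empty, the model is quantized and should be re-exported as a floating-point model before compiling.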
### 2.2 `System.IO.IOException: The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached`

A: Edit the configuration file and change the value 128 to a larger value:

```shell
sudo gedit /proc/sys/fs/inotify/max_user_instances
```
### 2.3 `RuntimeError: Failed to initialize hostfxr.` or `RuntimeError: Failed to get hostfxr path.`

A: You need to install dotnet-sdk-7.0. Do not install it in an `anaconda` virtual environment.

Linux:

```shell
sudo apt-get update
sudo apt-get install dotnet-sdk-7.0
```

If the error persists after installation, configure the `dotnet` environment variable:

```shell
export DOTNET_ROOT=/usr/share/dotnet
```

Windows: please refer to the official Microsoft documentation.
### 2.4 `The given key 'K230' was not present in the dictionary`

A: You need to install nncase-kpu.

Linux:

```shell
pip install nncase-kpu
```

Windows: download the corresponding version of the whl package from the nncase GitHub tags page and install it with pip.

Before installing nncase-kpu, check the installed nncase version, then install the nncase-kpu release that matches it:

```shell
# Check the installed nncase version
> pip show nncase | grep "Version:"
Version: 2.8.0

# Linux
> pip install nncase-kpu==2.8.0

# Windows
> pip install nncase_kpu-2.8.0-py2.py3-none-win_amd64.whl
```
## 3. Errors During Inference

### 3.1 `nncase.simulator.k230.sc: not found`

Or one of the following:

- `nncase.simulator.k230.sc: Permission denied.`
- `Input/output error.`

A: Add the installation path of nncase to the `PATH` environment variable, and check that the nncase and nncase-kpu versions are consistent. If they differ, install matching versions of both Python packages:

```shell
pip install nncase==x.x.x.x nncase-kpu==x.x.x.x
```

```shell
root@a52f1cacf581:/mnt# pip list | grep nncase
nncase          2.1.1.20230721
nncase-kpu      2.1.1.20230721
```
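The version check above can also be scripted. A small sketch that parses the kind of output `pip list | grep nncase` produces (the helper name is ours):

```python
# Minimal sketch: verify that nncase and nncase-kpu report the same version,
# given output in the two-column format produced by `pip list`.
def nncase_versions(pip_list_output):
    """Map package name -> version for nncase packages in `pip list` output."""
    versions = {}
    for line in pip_list_output.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].startswith("nncase"):
            versions[parts[0]] = parts[1]
    return versions

output = """nncase           2.1.1.20230721
nncase-kpu       2.1.1.20230721"""
v = nncase_versions(output)
print(v["nncase"] == v["nncase-kpu"])  # True means the versions match
```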
## 4. Errors During Inference on K230 Development Board

### 4.1 `data.size_bytes() == size = false (bool)`

A: The input data supplied at inference time does not match the shape and type of the model's input nodes. When preprocessing parameters are configured during model compilation, the shape and type information of the input nodes is updated accordingly. Please generate input data according to the `input_shape` and `input_type` configured during model compilation.
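Before deploying, you can verify that your input buffer has exactly the byte size the input node expects. A minimal sketch (the helper, the dtype table, and the example shape are illustrative assumptions):

```python
# Minimal sketch: check that an input buffer's byte size matches what the
# model's input node expects, mirroring the data.size_bytes() == size check.
# input_shape / input_type are the values configured at compile time.
DTYPE_SIZES = {"uint8": 1, "int8": 1, "float16": 2, "float32": 4}

def expected_size_bytes(input_shape, input_type):
    """Bytes required for one input tensor of the given shape and dtype."""
    size = DTYPE_SIZES[input_type]
    for dim in input_shape:
        size *= dim
    return size

# e.g. an NCHW image input compiled with input_type="uint8"
shape, dtype = (1, 3, 224, 224), "uint8"
data = bytes(1 * 3 * 224 * 224)  # stand-in for the real input buffer
print(len(data) == expected_size_bytes(shape, dtype))  # True
```

If this check fails on your real data, regenerate the input with the `input_shape` and `input_type` used at compile time.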
### 4.2 `std::bad_alloc`

A: This is usually caused by a memory allocation failure. Check the following:

- Whether the generated kmodel exceeds the available memory of the current system.
- Whether the app has memory leaks.
### 4.3 `terminate: Invalid kmodel`

This custom exception is thrown when loading the `kmodel` with the following code:

```cpp
interp.load_model(ifs).expect("Invalid kmodel");
```

A: In the absence of other issues, this is caused by a mismatch between the nncase version used to compile the `kmodel` and the current SDK version. Please refer to the SDK and nncase version correspondence table and follow the guide to update the nncase runtime library.