ICL-Project15-ELP


Getting Started

Anaconda

First, install Anaconda by following the guide here.

Creating an Anaconda virtual environment

It is necessary to create a virtual environment for this project so that its dependencies (in our case, specific Python library versions) stay isolated from any other versions of those packages you may already have installed.

Launch the Anaconda Prompt once you have Anaconda installed.

[Image: Anaconda Prompt in the Start Menu]

In the Anaconda Prompt terminal, execute the command below that matches the model you want to run in order to create a virtual environment. You only need to do this once!

For Yolov4 or Faster R-CNN model

conda create -n TF1env python=3.7

and follow the instructions; you will be prompted to enter ‘y’ to proceed with the creation. Next, activate your virtual environment by entering

conda activate TF1env

and you will see (base) change to (TF1env).

You are now in the virtual environment in which to install the necessary Python libraries for Yolov4 or Faster R-CNN.
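If you want a quick sanity check before installing anything, the short snippet below can be run from a Python prompt inside the activated environment (type python in the Anaconda Prompt to start one). It only assumes the environment was created with Python 3.7 as above.

# Confirm you are on the TF1env interpreter rather than your base install
import sys
print(sys.version)      # should begin with "3.7"
print(sys.executable)   # the path should contain "TF1env", e.g. ...\envs\TF1env\python.exe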

For Custom Vision model

conda create -n TF2env python=3.7

and follow the instructions; you will be prompted to enter ‘y’ to proceed with the creation. Next, activate your virtual environment by entering

conda activate TF2env

and you will see (base) change to (TF2env).

You are now in the virtual environment in which to install the necessary Python libraries for Custom Vision.

Python Dependency installation for Yolov4 or Faster R-CNN

We now need to install all the packages required for the system to work as intended, which means installing the same package versions that were used when developing the system. Luckily, this is just a short sequence of commands. Please ensure you are in your virtual environment by checking for (TF1env); if you see (base), refer to the last code block of “For Yolov4 or Faster R-CNN model” to activate the environment.

To install the packages needed, run the following commands:

pip install pydub
pip install tensorflow-gpu==1.13.1   # if you are NOT running on a GPU, use: pip install tensorflow==1.13.1
pip install keras==2.0.8
pip install librosa
pip install pandas 
pip install matplotlib
pip install Pillow
pip install opencv-python
pip install ffmpeg
pip install ffprobe
pip install h5py==2.9.0
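As a quick way to confirm the installs resolved to the expected versions, the sketch below imports the main packages and prints their versions from a Python prompt inside TF1env. The import names (cv2 for opencv-python, PIL for Pillow) are the standard ones; the expected version numbers are simply those pinned above.

# Optional: verify the TF1env installation by importing the packages and printing versions
import tensorflow as tf
import keras
import h5py
import librosa, pandas, matplotlib, PIL, cv2, pydub

print("tensorflow:", tf.__version__)     # expected 1.13.1
print("keras:", keras.__version__)       # expected 2.0.8
print("h5py:", h5py.__version__)         # expected 2.9.0
print("opencv-python:", cv2.__version__)
print("librosa:", librosa.__version__)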

Python Dependency installation for Custom Vision

We now need to install all the packages required for the system to work as intended, which means installing the same package versions that were used when developing the system. Luckily, this is just a short sequence of commands. Please ensure you are in your virtual environment by checking for (TF2env); if you see (base), refer to the last code block of “For Custom Vision model” to activate the environment.

To install the packages needed, run the following commands:

pip install pydub
pip install tensorflow
pip install librosa
pip install pandas 
pip install matplotlib
pip install Pillow
pip install opencv-python
pip install ffmpeg
pip install ffprobe
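The same kind of import check works for TF2env; the short sketch below only confirms that the packages import cleanly and that a TensorFlow 2.x release was picked up (no exact version is pinned for this environment).

# Optional: verify the TF2env installation imports cleanly
import tensorflow as tf
import librosa, pandas, matplotlib, PIL, cv2, pydub

print("tensorflow:", tf.__version__)   # any 2.x release is expected here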

CUDA, cuDNN installation for Yolov4 and Faster R-CNN (Ignore if not using GPU)

To run these models on a GPU, you need to have specific versions of CUDA and cuDNN installed. This allows machine learning inference to run on your GPU rather than your CPU, which typically speeds up inference by around 10 times compared with a CPU of a similar release year (the exact gain depends on the GPU).

Version downloads

The versions needed are CUDA 10.0 and cuDNN 7.4. This is because Tensorflow 1.13.1 requires these specific versions of CUDA and cuDNN to run; see the compatibility table here.

CUDA 10.0

CUDA can be installed by following the instructions in the .exe installer.

cuDNN 7.4

This is a short process that must be done manually. After downloading the cuDNN files from “Version downloads”, unzip the archive and you will have a folder that looks like this:

[Image: cuDNN folder contents]

These folders correspond to the folders where CUDA is installed. In your C:\ drive (or wherever you installed CUDA), find the location where CUDA 10.0 has been installed; for example, mine is:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0

[Image: CUDA v10.0 folder]

Inside this folder you will find folders with the same names as those in your cuDNN download. Simply copy the contents of each cuDNN folder into the corresponding CUDA folder. If there is a folder within a folder, do not copy the whole folder across; follow the folders down to the files themselves and copy those into the corresponding locations.
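If you prefer to script this copy step, the sketch below mirrors the manual instructions, assuming the unzipped cuDNN download contains bin, include and lib\x64 subfolders. Both paths are placeholders you will need to change to your own locations, and copying into Program Files usually requires an elevated (administrator) prompt.

# Sketch of the manual cuDNN copy step (adjust both paths to your machine, run as administrator)
import shutil
from pathlib import Path

cudnn_dir = Path(r"C:\Downloads\cudnn-10.0-v7.4\cuda")   # placeholder: your unzipped cuDNN folder
cuda_dir = Path(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0")

for subdir in ("bin", "include", r"lib\x64"):
    for item in (cudnn_dir / subdir).iterdir():
        if item.is_file():                               # copy the files themselves, not whole subfolders
            shutil.copy2(item, cuda_dir / subdir)
            print("copied", item.name, "->", cuda_dir / subdir)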

System Path check

A required check to ensure that CUDA will work properly is to confirm it has been added to the system path variables. To do this, open Windows File Explorer, open the properties of your PC and click “Advanced System Settings”:

[Images: PC Properties / Advanced System Settings]

This will open a panel that looks like the image on the left below. Click on “Environment Variables” and a new window will open with a section called “System variables”. It should include all the variables beginning with “CUDA”, like the image on the right below.

[Images: System Properties panel (left) / System variables (right)]

If any are missing, you can add them using the “New…” button. One thing to look out for: if “CUDA_PATH_V10_0” appears as just “CUDA_PATH”, this is also fine; just ensure the “Value” points to the correct folder in your C:\ drive (or wherever you installed CUDA).

Below is a list of the variables and values for ease of use, if needed.

CUDA_BIN_PATH %CUDA_PATH%\bin
CUDA_LIB_PATH %CUDA_PATH%\lib\x64
CUDA_PATH_V10_0 C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0 (or wherever you installed CUDA)
CUDA_SDK_BIN_PATH %CUDA_SDK_PATH%\bin\win64
CUDA_SDK_LIB_PATH %CUDA_SDK_PATH%\common\lib\x64
CUDA_SDK_PATH C:\ProgramData\NVIDIA Corporation\CUDA Samples\v10.0 (or wherever the CUDA Samples are in your ProgramData)
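A quick way to confirm these variables are actually visible to new processes is the short check below, run from a Python prompt; the variable names are the ones listed above. Open a fresh Anaconda Prompt first, since changes to environment variables only apply to prompts opened afterwards.

# Print the CUDA-related environment variables described in the list above
import os

for name in ("CUDA_PATH", "CUDA_PATH_V10_0", "CUDA_BIN_PATH", "CUDA_LIB_PATH",
             "CUDA_SDK_PATH", "CUDA_SDK_BIN_PATH", "CUDA_SDK_LIB_PATH"):
    print(name, "=", os.environ.get(name, "<not set>"))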

Tensorflow GPU check

Now that we have CUDA, cuDNN and Tensorflow-GPU installed, we have to check whether TensorFlow can see the GPU. To do this, open the Anaconda Prompt and start Python by typing

python

This should drop you into a Python interpreter. Execute the following commands in order:

import tensorflow as tf
tf.test.is_gpu_available()

If this returns “True”, then you have successfully installed CUDA and cuDNN, and TensorFlow will run on the GPU when making model inferences. If it returns “False”, copy the error into Google and there should be a solution!
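Before searching the error, it can also help to see exactly which devices TensorFlow detects. The TF 1.x diagnostic below lists them; on a working setup the output should include a “/device:GPU:0” entry.

# TF 1.x diagnostic: list the devices TensorFlow can see
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.device_type, device.name)   # a GPU entry confirms CUDA/cuDNN were picked up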

One common error mentions a missing .dll file; this may be due to the cuDNN files not being copied in correctly. If you have checked that the files are copied in, another possible solution is to reinstall tensorflow-gpu by running the commands below in your Anaconda environment.

pip uninstall tensorflow-gpu==1.13.1
pip install tensorflow-gpu==1.13.1