Dev:AI/Daniel

From C3LearningLabs
Latest revision as of 12:56, 12 September 2019

Daniel
Theory

Neural architecture

Convergence

  • moving toward

Future work

  • Cleaner parameter handling at startup
  • Automatic adaptation of input and output based on a check of the network config on the server, i.e. what I previously did manually.
  • Batch balancing based on load.

Server

NVIDIA TensorRT Inference Server

https://devblogs.nvidia.com/nvidia-serves-deep-learning-inference/

https://github.com/NVIDIA/tensorrt-inference-server

https://developer.nvidia.com/tensorrt

Client

Threads

The architecture is based on four main threads.

The program is started with a single terminal call, which starts all the threads.


Thread 1 - PreProcess
  • Input: file video stream
  • Preprocessing
  • Send to Batcher thread


Thread 2 - Batcher
  • Send to server
  • http
  • gp....


Thread 3 - Collector
  • Receives the results


Thread 4 - Presentation
  • Insert into DB
  • Create video
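The four threads above can be sketched as a queue-connected pipeline. This is a minimal illustration with stand-in processing steps and made-up data, not the actual client code; the point is the queue wiring and the None sentinel used for shutdown:

```python
import threading
import queue

def preprocess(frames, out_q):
    # Thread 1 - read frames from the video stream and preprocess them
    for frame in frames:
        out_q.put(frame * 2)          # stand-in for real preprocessing
    out_q.put(None)                   # signal end of stream downstream

def batcher(in_q, out_q, batch_size=4):
    # Thread 2 - group frames into batches and "send" them to the server
    batch = []
    while True:
        item = in_q.get()
        if item is None:
            break
        batch.append(item)
        if len(batch) == batch_size:
            out_q.put(list(batch))    # here: would be an HTTP/gRPC request
            batch.clear()
    if batch:
        out_q.put(list(batch))        # flush the last partial batch
    out_q.put(None)

def collector(in_q, out_q):
    # Thread 3 - receive the results from the server
    while True:
        batch = in_q.get()
        if batch is None:
            break
        out_q.put(sum(batch))         # stand-in for a server response
    out_q.put(None)

def presenter(in_q, results):
    # Thread 4 - insert into DB / create video; here we just record results
    while True:
        item = in_q.get()
        if item is None:
            break
        results.append(item)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
results = []
threads = [
    threading.Thread(target=preprocess, args=(range(10), q1)),
    threading.Thread(target=batcher, args=(q1, q2)),
    threading.Thread(target=collector, args=(q2, q3)),
    threading.Thread(target=presenter, args=(q3, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # -> [12, 44, 34]
```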

MISC

Swift

Storage management

Remote storage file system

(Ericsson wants to use it)

Helena

Audio files converted to spectrogram video, Mojova???

Results

Search the DB to find the events we are looking for.
Get the timestamp for the event.
Look at the video at that timestamp.
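The lookup flow above can be sketched as follows. The schema is hypothetical (the table name `events` and columns `label`/`ts_seconds` are illustrative, as is the assumed frame rate of 25 fps):

```python
import sqlite3

# Hypothetical event DB for the sketch; the real schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (label TEXT, ts_seconds REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("normal", 3.0), ("sned", 12.5), ("sned", 40.0)])

# 1. Search the DB for the events we look for
rows = conn.execute(
    "SELECT ts_seconds FROM events WHERE label = ? ORDER BY ts_seconds",
    ("sned",)).fetchall()

# 2. Get the timestamp and 3. convert it to a frame offset in the video
fps = 25  # assumed frame rate of the recorded stream
for (ts,) in rows:
    frame = int(ts * fps)
    print(f"event at {ts}s -> seek to frame {frame}")
```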

Projects

Daniel: risedl78/wasp_images:tf2_0_keras_tfhub_cpu_apis
Rego: regogranlund/rise:daniel_org_tf2_0_keras_tfhub_cpu_apis



Test 1

Did work!

Repository
Daniel: risedl78/wasp_images:tf2_0_keras_tfhub_cpu_apis
Rego: regogranlund/rise:daniel_org_tf2_0_keras_tfhub_cpu_apis
Login
docker login --username username
Pull
Get the container from the repository.
sudo docker pull risedl78/wasp_images:tf2_0_keras_tfhub_cpu_apis
sudo docker pull regogranlund/rise:daniel_org_tf2_0_keras_tfhub_cpu_apis
Connect to the X server: xhost local:root
Run Container
Instantiate container
Use start if you want to continue working in an already created instance.
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw -it risedl78/wasp_images:tf2_0_keras_tfhub_cpu_apis


Run the program
cd /code
python3 keras_hub_transfer_learning.py


Commit
Commit and tag
sudo docker commit af34369a969a regogranlund/rise:daniel_org_tf2_0_keras_tfhub_cpu_apis
Push
sudo docker push regogranlund/rise:daniel_org_tf2_0_keras_tfhub_cpu_apis

Test 2

Repository
Daniel: risedl78/wasp_images:inference_client_live_v9
Rego: regogranlund/rise:daniel_org_inference_client_live_v9


Pull
Get the container from the repository.
sudo docker pull risedl78/wasp_images:inference_client_live_v9
Connect to the X server: xhost local:root


Run Container
Instantiate container
Use start if you want to continue working in an already created instance.
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia -it risedl78/wasp_images:inference_client_live_v9


Run the program
cd /code/inference_client


Edit code
  • Copy files between the container and the host Linux machine
sudo docker cp 74681ea6d8a0:/code/inference_client/inference_client_threaded_no_tf.py /home/rego/Daniel/inference_client_live_v9
sudo docker cp /home/rego/Daniel/inference_client_live_v9/inference_client_threaded_no_tf.py 74681ea6d8a0:/code/inference_client/


Get the computer's IP address
sudo apt install net-tools
ifconfig

172.17.0.1
10.255.129.63


Commit
Commit and tag
sudo docker commit 74681ea6d8a0 regogranlund/rise:daniel_org_inference_client_live_v9


Push
sudo docker push regogranlund/rise:daniel_org_inference_client_live_v9

Kalman - Follow logs

Detects and follows logs in industry.
A network trained to detect logs.
The network is based on the ssd_inspection_v2_coco_... network, which detects people and objects.
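Before the Docker details, the idea of the Kalman tracker can be illustrated with a minimal 1-D constant-velocity filter. This is a generic sketch under made-up noise parameters, not the code in stop_detector_with_Kalman.py: predict where a detected object will be in the next frame, then correct the prediction with each new detection.

```python
# Minimal 1-D constant-velocity Kalman filter (illustrative sketch).
# State: position x and velocity v; q is process noise, r measurement noise.

def kalman_1d(measurements, dt=1.0, q=1e-3, r=0.5):
    x, v = measurements[0], 0.0      # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]     # state covariance
    estimates = []
    for z in measurements:
        # Predict: constant-velocity motion model, P' = F P F^T + Q
        x = x + dt * v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update: blend the prediction with the measured position z
        S = P[0][0] + r                      # innovation covariance
        k0, k1 = P[0][0] / S, P[1][0] / S    # Kalman gain
        y = z - x                            # innovation
        x, v = x + k0 * y, v + k1 * y
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        estimates.append(x)
    return estimates

# Feed in noisy positions of one object moving roughly 1 unit per frame
est = kalman_1d([0.0, 1.1, 1.9, 3.2, 4.0])
print([round(e, 2) for e in est])
```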

Repository
Daniel: risedl78/wasp_images:detector_with_Kalman_v3
Rego: regogranlund/rise:daniel_org_detector_with_Kalman_v3 (NOT Created)
Rego: regogranlund/rise:daniel_detector_with_Kalman_v3


Pull
Get the container from the repository.
sudo docker pull risedl78/wasp_images:detector_with_Kalman_v3


Run Container
Instantiate container
Use start if you want to continue working in an already created instance.
Connect to the X server: xhost local:root
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia -it risedl78/wasp_images:detector_with_Kalman_v3


Start Container
Reopen a started container instance
List container instances: sudo docker ps -a
sudo docker start -i <DOCKER CONTAINER ID>
Example: sudo docker start -i c29b23c6905a


Start the program
cd /code/research/object_detection
python3 stop_detector_with_Kalman.py


Edit code
cd /code/research/object_detection
jove stop_detector_with_Kalman.py
Commit
Commit and tag
sudo docker commit c29b23c6905a regogranlund/rise:daniel_detector_with_Kalman_v3
Push
sudo docker push regogranlund/rise:daniel_detector_with_Kalman_v3

Lära 1 - Detect a skewed board

Repository
Daniel: risedl78/wasp_images:tf2_0_keras_tfhub_sherpa
Rego: regogranlund/rise:daniel_org_tf2_0_keras_tfhub_sherpa (NOT Created)
Rego: regogranlund/rise:daniel_tf2_0_keras_tfhub_sherpa_rg



Pull

Get the container from the repository.
sudo docker pull risedl78/wasp_images:tf2_0_keras_tfhub_sherpa


Run Container

Instantiate container
Use start if you want to continue working in an already created instance.
Connect to the X server: xhost local:root
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia -it risedl78/wasp_images:tf2_0_keras_tfhub_sherpa


Start Container

Reopen a started container instance
List container instances: sudo docker ps -a
sudo docker start -i <DOCKER CONTAINER ID>
Example: sudo docker start -i 06b57a5a652f


Start the program

cd /code
  • keras_hub_transfer_learning.py : does the training; this is the one we run
  • testing_keras_and_hub.py :
  • convert_saved_model_to_graphdef.py :
python3 keras_hub_transfer_learning.py


Edit code

cd /code
emacs keras_hub_transfer_learning.py &
emacs testing_keras_and_hub.py &
emacs convert_saved_model_to_graphdef.py &
Commit
Commit and tag
sudo docker commit 06b57a5a652f regogranlund/rise:daniel_tf2_0_keras_tfhub_sherpa_rg
Push
sudo docker push regogranlund/rise:daniel_tf2_0_keras_tfhub_sherpa_rg

Lära 2

Continuation of Lära 1, saved in its own repo.

Input data stored in: /dataset/. Output model stored in: /tmp/saved_models.


Repository
Daniel: risedl78/wasp_images:tf2_0_keras_tfhub_sherpa
Rego: regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1
Save the Lära 1 session in a new repo

User: regogranlund
Repo: rise
Tag Name: tf2_0_keras_tfhub_sherpa_rg_v1

Commit and tag
sudo docker commit [CONTAINER-ID] [user]/[repo name]:[Tag name you want to give the image]
[CONTAINER-ID] : list the processes (instances) with: sudo docker ps -a
Example: sudo docker commit 06b57a5a652f regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1
Push
sudo docker push [user]/[repo name]:[TAG]
Example: sudo docker push regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1


Pull

Not needed here; the image already existed locally after the commit and tag step.
Get the container from the repository.
sudo docker pull regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1

Run Container

Instantiate container
Use start if you want to continue working in an already created instance.
Connect to the X server: xhost local:root
Test the X server: xeyes
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia -it regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1


Start Container

Reopen a started container instance
List container instances: sudo docker ps -a
sudo docker start -i <DOCKER CONTAINER ID>
Example: sudo docker start -i c9c45638051c


Start the program

cd /code
  • keras_hub_transfer_learning.py : does the training; this is the one we run
  • testing_keras_and_hub.py :
  • convert_saved_model_to_graphdef.py :
python3 keras_hub_transfer_learning.py


Edit code

cd /code
emacs keras_hub_transfer_learning.py &
emacs testing_keras_and_hub.py &
emacs convert_saved_model_to_graphdef.py &
Commit
Commit and tag
sudo docker commit c9c45638051c regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1
Push
sudo docker push regogranlund/rise:tf2_0_keras_tfhub_sherpa_rg_v1

Start inference server for Lära-2

Copy model
The models in Lära-2 are stored in /tmp/saved_models/
The models should be copied to /home/rego/inference_server_models/models/
sudo docker cp c9c45638051c:/tmp/saved_models/1562144167_save_model /home/rego/inference_server_models/models/


Move model to folder /1/model.savedmodel
cd /home/rego/inference_server_models/models/1562144167_save_model/
mkdir /home/rego/inference_server_models/models/1562144167_save_model/1
mkdir /home/rego/inference_server_models/models/1562144167_save_model/1/model.savedmodel
mv /home/rego/inference_server_models/models/1562144167_save_model/saved_model.pb /home/rego/inference_server_models/models/1562144167_save_model/1/model.savedmodel
sudo chmod -R 777 /home/rego/inference_server_models/models/1562144167_save_model/
sudo chown -R rego:rego /home/rego/inference_server_models/models/1562144167_save_model/


Create model configuration
Doc: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-guide/docs/model_configuration.html
File Name: config.pbtxt
Path : /home/rego/inference_server_models/models/1562144167_save_model/1/model.savedmodel
sudo pico /home/rego/inference_server_models/models/1562144167_save_model/config.pbtxt
sudo chmod 777 /home/rego/inference_server_models/models/1562144167_save_model/config.pbtxt
name: "1562144167_save_model"
platform: "tensorflow_savedmodel"
max_batch_size: 8
#default_model_filename: "saved_model.pb"
input [
  {
    name: "keras_layer_input"
    data_type: TYPE_FP32
    format: FORMAT_NHWC
    dims: [224, 224, 3]
  }

]
output [
  {
	name: "production_state_detection"
	data_type: TYPE_FP32
	dims: [2]
	label_filename: "label_map.pbtxt"
  }
]
instance_group [
	       {
		kind: KIND_GPU,
		count: 1
	       }
]
Label File Name: label_map.pbtxt
Path : /home/rego/inference_server_models/models/1562144167_save_model/
sudo pico /home/rego/inference_server_models/models/1562144167_save_model/label_map.pbtxt
sudo chmod 777 /home/rego/inference_server_models/models/1562144167_save_model/label_map.pbtxt
item {
  id: 0
  display_name: "normal"
}
item {
  id: 1
  display_name: "sned"
}
Set owner and access.
sudo chmod -R 777 /home/rego/inference_server_models/models/1562144167_save_model/
sudo chown -R rego:rego /home/rego/inference_server_models/models/1562144167_save_model/


Start inference server

Models are stored at: /home/rego/inference_server_models/models

sudo nvidia-docker run --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v/home/rego/inference_server_models/models:/models nvcr.io/nvidia/tensorrtserver:19.04-py3 trtserver --model-store=/models

--rm : the container instance is deleted after the session ends.


IP Number

ifconfig

10.255.129.208

Test model generated in Lära-2

Repository
Daniel: risedl78/wasp_images:inference_client_live_v9
Rego: regogranlund/rise:daniel_org_inference_client_live_v9_RegoFix


Run Container
Instantiate container
Use start if you want to continue working in an already created instance.
Connect to the X server: xhost local:root
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia -it risedl78/wasp_images:inference_client_live_v9


Start Container
Reopen a started container instance
List container instances: sudo docker ps -a
sudo docker start -i <DOCKER CONTAINER ID>
Example: sudo docker start -i 74681ea6d8a0


Copy video to container
sudo docker cp /home/rego/Daniel/woodskew/cam1_2019-05-07_17_03_50.avi 74681ea6d8a0:/code/inference_client/
Set access to video
chmod 777 /code/inference_client/cam1_2019-05-07_17_03_50.avi


Copy code to new file
cd /code/inference_client
cp inference_client_threaded_no_tf.py inference_client_threaded_no_tf_rg.py
cp FileVideoStream.py FileVideoStream_rg.py


Run
python3 inference_client_threaded_no_tf_rg.py --liveDash=False --liveStartSegment=-1 --post_to_db=False


Commit
Commit and tag
sudo docker commit 74681ea6d8a0 regogranlund/rise:daniel_org_inference_client_live_v9_RegoFix


Push
sudo docker push regogranlund/rise:daniel_org_inference_client_live_v9_RegoFix

Wasp

Pull
Get the container from the repository.
sudo docker pull risedl78/wasp_images:inference_client_live_v9
Fix xhost: xhost local:root


Run Container
Instantiate container
Use start if you want to continue working in an already created instance.
Run (runtime=nvidia)
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw --runtime=nvidia -it risedl78/wasp_images:tf2_0_keras_test_v2
Run (Not runtime=nvidia)
sudo docker run -e QT_X11_NO_MITSHM=1 -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:rw -it risedl78/wasp_images:tf2_0_keras_test_v2


Start
sudo docker start -i <DOCKER CONTAINER ID>


View all Processes
sudo docker ps -a


View all Images
sudo docker images


Copy files to container
sudo docker cp <DOCKER CONTAINER ID>:/path ./path

ID f7f904e77dec

sudo docker start -i f7f904e77dec