TensorFlow and C++

TensorFlow is a powerful and well-designed tool for neural networks. The Python API is well documented and getting started is pretty simple. On the other hand, the documentation of the C++ API is reduced to a minimum. This tutorial shows you how to:

  • Build and train an easy graph in Python
  • Freeze a graph and run it in C++

In this tutorial we will work with Bazel, Google's own build tool. If you prefer to work without Bazel, check out how to get TensorFlow going without Bazel here. As an example we will use the world's smallest net: it consists of just one input neuron and one output neuron.
The training goal is to get the same value at the output as at the input. That doesn't make much sense, but it is just an example. The loss function will be the squared error.

Requirements
  • Install Bazel
  • Install TensorFlow. It is not necessary to install TensorFlow from source.
  • Clone the TensorFlow repository
  • git clone --recursive https://github.com/tensorflow/tensorflow

Freeze a Graph

This is the easiest way to load a model in C++. The way TensorFlow saves models is a little bit confusing at the beginning. First, we have the graph definition in the Graph.pb file. That's the layout of the graph; it doesn't include the values of the variables, like weights and biases. Those are saved in the checkpoint files. Freezing the graph means that we combine those two files into one: all variables are converted to constants. A more detailed documentation can be found on the TensorFlow site.
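To see what freezing actually does, you can compare the two protobuf files once the training script below has produced them. A minimal sketch (the file names and the SaveFiles folder are the ones used by the script below):

#!/usr/bin/env python
# Sketch: compare the plain graph definition with the frozen graph.
# Assumes the SaveFiles folder written by the training script below.
import tensorflow as tf
from google.protobuf import text_format

# Graph.pb is written as a text proto by tf.train.write_graph
graph_def = tf.GraphDef()
with open("SaveFiles/Graph.pb") as f:
	text_format.Merge(f.read(), graph_def)

# frozen_graph.pb is a binary proto
frozen_def = tf.GraphDef()
with open("SaveFiles/frozen_graph.pb", "rb") as f:
	frozen_def.ParseFromString(f.read())

count_vars = lambda gd: sum(1 for n in gd.node if "Variable" in n.op)
print("Variables in Graph.pb:", count_vars(graph_def))
print("Variables in frozen_graph.pb:", count_vars(frozen_def))  # should be 0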
First of all, we build, run and train the graph in Python. You can also find all source files here.
Note: This file is written for TensorFlow version 1.0. If you use an older version, change tf.multiply to tf.mul and tf.subtract to tf.sub.

#!/usr/bin/env python
#WorldSmallestNet

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.tools import freeze_graph

import tensorflow as tf
import numpy as np
import random, os, shutil

#Define Graph
with tf.Graph().as_default():
	#Placeholder
	X1 = tf.placeholder(tf.float32,[None,1],name="Input")
	Y_ = tf.placeholder(tf.float32,[None,1],name="Label")
	#First Layer
	with tf.name_scope('Layer1'):
		w1 = tf.Variable(tf.constant(0.),name="Weights")
		b1 = tf.Variable(tf.constant(0.1),name="bias")
		X2 = tf.nn.tanh(tf.multiply(X1,w1)+b1)
		tf.summary.scalar("w1",w1)
		tf.summary.scalar("b1",b1)
	#Second Layer
	with tf.name_scope('Layer2'):
		w2 = tf.Variable(tf.constant(0.),name="Weights")
		b2 = tf.Variable(tf.constant(0.1),name="bias")
		Y = tf.nn.sigmoid(tf.multiply(X2,w2)+b2, name="Output")
		tf.summary.scalar("w2",w2)
		tf.summary.scalar("b2",b2)
	#Define Loss and Training Step
	with tf.name_scope('Train'):
		Loss = tf.reduce_mean(tf.square(tf.subtract(Y_,Y)),name="Loss")
		train_step = tf.train.AdamOptimizer(0.01).minimize(Loss)
		tf.summary.scalar("Loss",Loss)
	#Delete Older Saves
	if os.path.exists("SaveFiles"):
		shutil.rmtree("SaveFiles")
	#initialize Variables and start session
	summary = tf.summary.merge_all()
	init = tf.global_variables_initializer()
	saver = tf.train.Saver()
	sess = tf.Session()
	summary_writer = tf.summary.FileWriter("SaveFiles", graph=tf.get_default_graph())
	sess.run(init)
	#export Graph
	tf.train.write_graph(sess.graph_def, "SaveFiles", "Graph.pb")
	#Train Models
	MaxStep = 10000
	for Step in range(MaxStep):
		# make RandomBatches with 0 and 1
		NumberBatch = []
		for i in range(50):
			Number = float(random.randint(0,1))
			NumberBatch.append(Number)
		NumberBatch = np.array(NumberBatch)
		NumberBatch = np.expand_dims(NumberBatch, axis=1)
		# run Training Step
		sess.run(train_step,feed_dict={Y_:NumberBatch,X1:NumberBatch})
		if Step%2 == 0:
			summary_str = sess.run(summary, feed_dict={Y_:NumberBatch,X1:NumberBatch})
			summary_writer.add_summary(summary_str, Step)
			summary_writer.flush()

		# Save Everything at last step
		if Step+1 == MaxStep:
			checkpoint_file = os.path.join("SaveFiles", 'model.ckpt')
			saver.save(sess, checkpoint_file, global_step=Step)
	# Show Results for 0 and 1
	InputTensor0 = [0]
	InputTensor1 = [1]
	InputTensor0 = np.array(InputTensor0)
	InputTensor1 = np.array(InputTensor1)
	InputTensor0 = np.expand_dims(InputTensor0, axis=1)
	InputTensor1 = np.expand_dims(InputTensor1, axis=1)
	print("Input: 0  | Output: ",sess.run(Y,feed_dict={Y_:InputTensor0,X1:InputTensor0}))
	print("Input: 1  | Output: ",sess.run(Y,feed_dict={Y_:InputTensor1,X1:InputTensor1}))

# Freeze the graph
checkpoint_state_name = "checkpoint"
input_graph_name = "SaveFiles/Graph.pb"
output_graph_name = "SaveFiles/frozen_graph.pb"
input_saver_def_path = ""
input_binary = False
input_checkpoint_path = "SaveFiles/model.ckpt-9999"
output_node_names = "Layer2/Output" 
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
clear_devices = False
freeze_graph.freeze_graph(input_graph_name, input_saver_def_path,
                          input_binary, input_checkpoint_path,
                          output_node_names, restore_op_name,
                          filename_tensor_name, output_graph_name,
                          clear_devices,"")

Download the model and run it with python WorldSmallestNetFreeze.py. All files will be saved in the new folder SaveFiles: the checkpoint files, the graph definition Graph.pb and the frozen graph. The script should also print two outputs, one for the input 0 and one for the input 1. We will use those to check if we really loaded a trained model.
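If you want to double-check the frozen graph before moving on to C++, you can load it back into Python and run it. A small sketch (paths as above); the outputs should match the ones printed by the training script:

#!/usr/bin/env python
# Sketch: load the frozen graph back into Python and run it.
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with open("SaveFiles/frozen_graph.pb", "rb") as f:
	graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
	tf.import_graph_def(graph_def, name="")
	with tf.Session(graph=g) as sess:
		Y = g.get_tensor_by_name("Layer2/Output:0")
		for value in [0.0, 1.0]:
			out = sess.run(Y, feed_dict={"Input:0": np.array([[value]])})
			print("Input:", int(value), "| Output:", out)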
Another way is to use the freeze tool on the command line. We recommend calling the tool from within your code, like in the example above. But if you already have a trained model and want to freeze it, do the following: change into the root directory of the downloaded TensorFlow repo and run the configure script.
./configure

Especially for big models the next step is a time-consuming process. Run the following command:

bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=<Path_to_repo>/WorldSmallestNet/Python/SaveFiles/Graph.pb \
--input_checkpoint=<Path_to_repo>/WorldSmallestNet/Python/SaveFiles/model.ckpt-9999 \
--output_graph=/tmp/frozen_graph.pb --output_node_names=Layer2/Output

Replace <Path_to_repo> with your local path. Use absolute paths and no shortcuts; I had trouble with ~/Documents instead of /home/dmayer/Documents. This example uses tf.name_scope to group the layers, which is why the output node is named Layer2/Output.
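If you are not sure what to pass as --output_node_names, you can list the node names in the graph definition first. A quick sketch (reading Graph.pb, which tf.train.write_graph stores as a text proto):

#!/usr/bin/env python
# Sketch: print all node names in Graph.pb to find the output node.
import tensorflow as tf
from google.protobuf import text_format

graph_def = tf.GraphDef()
with open("SaveFiles/Graph.pb") as f:
	text_format.Merge(f.read(), graph_def)
for node in graph_def.node:
	print(node.name)  # e.g. Input, Layer1/Weights, ..., Layer2/Output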
But how do we get the damn thing running in C++? Actually, it is pretty easy.
In this tutorial we will use Bazel to compile our project. As mentioned before, it is also possible to work without Bazel. First we have to create a folder in the TensorFlow repo for our project. Mine looks like this: ~/tensorflow/tensorflow/WorldSmallestNet/CPP
This folder contains two files:

  • The actual C++ file RunGraph.cpp
  • a file called BUILD with the instructions for Bazel

Both files can be downloaded from our GitHub repo. Let's have a look at the RunGraph.cpp file.

#include "tensorflow/core/public/session.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"

int main(int argc, char** argv) {

	std::string PathGraph = "/SaveFiles/frozen_graph.pb";

	//Setup Input Tensors 
	tensorflow::Tensor Input1(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,1}));
	tensorflow::Tensor Input0(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,1}));
	// Output
	std::vector output;
	Input1.scalar()() = 1.0;
	Input0.scalar()() = 0.0;

	//initial declaration Tensorflow
	tensorflow::Session* session;
	tensorflow::Status status;
	status = tensorflow::NewSession(tensorflow::SessionOptions(), &session);
	if (!status.ok()) {
   		std::cout << status.ToString() << "\n";
    	return 1;
    }
    // Define Graph
	tensorflow::GraphDef graph_def;
	status = ReadBinaryProto(tensorflow::Env::Default(),PathGraph, &graph_def);
	
	if (!status.ok()) {
    	std::cout << status.ToString() << "\n";
     	return 1;
   	}

   	// Add the graph to the session
  	status = session->Create(graph_def);
    if (!status.ok()) {
    	std::cout << status.ToString() << "\n";
        return 1;
    }
 
    // Feed dict
	std::vector> inputs = {
   		 { "Input:0", Input0},
    	};
		status = session->Run(inputs, {"Layer2/Output"},{}, &output);
		if (!status.ok()) {
   		 std::cout << status.ToString() << "\n";
   		return 1;
  		}
		auto Result = output[0].matrix();
		std::cout << "Input: 0 | Output: "<< Result(0,0) << std::endl;
	
	inputs = {
   		 { "Input:0", Input1},
    	};
		status = session->Run(inputs, {"Layer2/Output"},{}, &output);
		if (!status.ok()) {
   		 std::cout << status.ToString() << "\n";
   		return 1;
  		}
		auto Result1 = output[0].matrix();
		std::cout << "Input: 1 | Output: "<< Result1(0,0) << std::endl;	
} 

First of all, you need to change PathGraph to the path of your frozen graph. Also note that we don't need to feed both placeholders. When we froze the graph, we specified our output Y, and the second placeholder (Y_) is only needed to calculate the loss, not Y. If we had specified the loss as an output node, the program would also ask for the second placeholder.
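A quick way to convince yourself (a sketch in Python, same SaveFiles path as before): the freeze step keeps only the nodes the output depends on, so the Label placeholder should not survive the freeze at all.

#!/usr/bin/env python
# Sketch: list the placeholders that survived freezing.
import tensorflow as tf

graph_def = tf.GraphDef()
with open("SaveFiles/frozen_graph.pb", "rb") as f:
	graph_def.ParseFromString(f.read())
print([n.name for n in graph_def.node if n.op == "Placeholder"])
# Expected: ['Input'] -- the Label placeholder was pruned away.

Now let's have a look at the BUILD file.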

cc_binary(
    name = "RunGraph",
    srcs = ["RunGraph.cpp"],
    deps = [
        "//tensorflow/core:tensorflow",
    ],
)

Now it's time to build the whole thing. Run the configure script from the root of the TensorFlow repo:

./configure

Then change to your project folder and run

bazel build :RunGraph

That will take some time. Go get yourself a cup of coffee, you earned it! Once everything is compiled, you can find the executable in tensorflow/bazel-bin/tensorflow/WorldSmallestNet/CPP. Execute it with ./RunGraph and look what happens. If everything worked out right, the model should give the exact same outputs as the Python file before. You can also build and run the file from the project folder with

bazel run :RunGraph
We hope this tutorial helped you. We are currently working on a tutorial about serving TensorFlow models, another more complicated but more elegant way to save and load models. Stay tuned! Leave a comment if you have any questions. Happy TensorFlowing!
