The Caffe model I used: https://github.com/BVLC/caffe/tree/ea455eb29393ebe6de9f14e88bfce9eae74edf6d/models/bvlc_alexnet
From there, download the deploy prototxt file and the caffemodel for use in the model conversion.
First, convert the Caffe model to TensorFlow, following https://github.com/ethereon/caffe-tensorflow.
However, to generate the pb model from it I referred to the following code, from https://github.com/ethereon/caffe-tensorflow/issues/23:
from googlenet import GoogleNet  # the Python script generated by caffe-tensorflow
import tensorflow as tf
from freeze_graph import freeze_graph  # TensorFlow ships this tool for freezing graphs

x = tf.placeholder(tf.float32, shape=[1, 224, 224, 3])
y = tf.placeholder(tf.float32, shape=[1, 1000])
net = GoogleNet({'data': x})

sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
net.load('googlenet.npy', sess)

saver = tf.train.Saver()
saver.save(sess, 'chkpt', global_step=0, latest_filename='chkpt_state')
tf.train.write_graph(sess.graph.as_graph_def(), './', 'googlenet.pb', False)

input_saver_def_path = ''
input_binary = True
input_checkpoint_path = 'chkpt-0'
input_graph_path = 'googlenet.pb'
output_graph_path = 'frozen_googlenet.pb'  # write the frozen graph to a separate file so the unfrozen googlenet.pb is not overwritten
output_node_names = 'prob'
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
clear_devices = True

freeze_graph(input_graph_path, input_saver_def_path, input_binary,
             input_checkpoint_path, output_node_names, restore_op_name,
             filename_tensor_name, output_graph_path, clear_devices, "")
Note that the input node name has changed; print the graph yourself and you will see it.
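A minimal sketch of how to list the node names (reusing the GoogleNet class and npy from the script above; for AlexNet, substitute the script and weights generated by caffe-tensorflow):

import tensorflow as tf
from googlenet import GoogleNet  # the script generated by caffe-tensorflow, as above

x = tf.placeholder(tf.float32, shape=[1, 224, 224, 3])
net = GoogleNet({'data': x})

# Print every node in the graph; the first entry is the real input node name,
# and the softmax output (e.g. 'prob') is what output_node_names / --outputs needs.
for node in tf.get_default_graph().as_graph_def().node:
    print(node.name)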
Another blog post, https://ndres.me/post/convert-caffe-to-tensorflow/, mentions that the author's fork of caffe-tensorflow can generate the pb directly, but when I tried it, it threw an exception for alexnet, so I did not use it.
There is also https://www.cs.toronto.edu/~guerzhoy/tf_alexnet/, which provides a ready-made model. The original text reads:
AlexNet implementation + weights in TensorFlow
This is a quick and dirty AlexNet implementation in TensorFlow. You may also be interested in Davi Frossard's code/weights.
- -- the implementation itself + testing code for versions of TensorFlow current in 2017 (Python 3).
- -- for older versions of TensorFlow, in Python 2 (see the version with a variable rather than a placeholder input; you probably want the myalexnet_forward.py version if you want to fine-tune the networks.)
- -- the weights; they need to be in the working directory
- -- the classes, in the same order as the outputs of the network
- -- test images (images should be 227x227x3)
Credits:
- The model and weights are from the BVLC Caffe AlexNet release
- The weights were converted using caffe-tensorflow, and code was taken from there as well
But when I used it myself I found it threw an exception, possibly because I was on TensorFlow 1.3, so in the end I didn't use this either.
Next comes model compression, with the command:
~/tensorflow-master/transform_graph/tensorflow/tools/graph_transforms/transform_graph --in_graph=alexnet.pb --outputs="prob" --out_graph=quantized_alexnet.pb --transforms='quantize_weights'
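Before pushing the quantized pb to the phone, I find it useful to load it back in Python and run one forward pass as a sanity check. A rough sketch (the tensor names 'data:0' / 'prob:0' and the 227x227x3 input shape are assumptions; use the node names printed from your own graph):

import numpy as np
import tensorflow as tf

with tf.gfile.GFile('quantized_alexnet.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # 'data:0' and 'prob:0' are assumed names; adjust to match your graph.
    x = graph.get_tensor_by_name('data:0')
    prob = graph.get_tensor_by_name('prob:0')
    out = sess.run(prob, feed_dict={x: np.random.rand(1, 227, 227, 3).astype(np.float32)})
    print(out.shape, np.argmax(out))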
Finally, for importing the pb into the Android project, I referred to http://blog.csdn.net/u014432647/article/details/74743332 and https://github.com/ppplinday/alexnet-tensorflow-android-withoutfc67.
Note that I compiled TensorFlow 1.0 with Bazel myself to produce the arm64-v8a inference .so and jar files.
It is worth mentioning that https://github.com/ppplinday/alexnet-tensorflow-android-withoutfc67 reports an inference time of 0.713s for its model on an Android Redmi phone, and slightly faster, 0.6s, on a Huawei Mate 10.
Pasting the original content of http://blog.csdn.net/u014432647/article/details/74743332 below (a rough parameter count for the removed fc layers follows the quote):
First, a note that this alexnet is a modified network: because the model was too large, I deleted fully connected layer 6 and fully connected layer 7 of alexnet. The test phone is a Redmi Note 4X, with a Qualcomm Snapdragon 625 CPU.
Here are the GitHub repos of the test code for each:
arm compute library: tensorflow:
The test results are:
arm compute library: 0.422s, tensorflow: 0.713s. The arm compute library is roughly 40% faster than tensorflow. ARM's library does have some optimization, but it still has quite a few bugs at the moment and the interface is only available in C++ for now, so it may be better to wait until it matures before using it.
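As a rough back-of-the-envelope check (my own arithmetic with the standard AlexNet layer shapes, not from the quoted post) of why removing fc6 and fc7 shrinks the model so much:

# Weight counts of the standard AlexNet fully connected layers (biases ignored)
fc6 = 6 * 6 * 256 * 4096   # 37,748,736
fc7 = 4096 * 4096          # 16,777,216
total = 61e6               # AlexNet has roughly 61M parameters in total

print(fc6 + fc7)           # about 54.5M parameters removed
print((fc6 + fc7) / total) # roughly 90% of the whole network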
This roughly documents the process, as a note for future reference.