Getting started: using the new features of MIGraphX 0.5
MIGraphX 0.5 is primarily a performance and bugfix release. The release includes:
- Additional operators: split, ceil, floor (see the short sketch after this list)
- Support for additional models including NASNet-a_Large for Tensorflow
- Simplified python interface
- Performance improvements
- A driver for exercising MIGraphX
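As a quick illustration of the new operator support, the sketch below uses the onnx Python package to build a one-node graph containing the newly supported Floor operator and then parses and runs it with MIGraphX. The file name floor_test.onnx and the tensor names are placeholders for this sketch; the MIGraphX calls mirror the Python example later on this page.

import numpy as np
import onnx
from onnx import helper, TensorProto
import migraphx

# build a one-node ONNX graph that uses the newly supported Floor operator
node = helper.make_node('Floor', inputs=['x'], outputs=['y'])
graph = helper.make_graph(
    [node], 'floor_test',
    [helper.make_tensor_value_info('x', TensorProto.FLOAT, [1, 4])],
    [helper.make_tensor_value_info('y', TensorProto.FLOAT, [1, 4])])
onnx.save(helper.make_model(graph), 'floor_test.onnx')  # placeholder file name

# parse, compile and run it with MIGraphX
model = migraphx.parse_onnx('floor_test.onnx')
model.compile(migraphx.get_target("gpu"))
x = np.array([[0.2, 1.7, -0.3, 2.5]], dtype=np.float32)
print(np.array(model.run({'x': migraphx.argument(x)}), copy=False))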
A new driver is installed at /opt/rocm/bin/migraphx-driver. The --help option shows the available commands and options:
prompt$ /opt/rocm/bin/migraphx-driver --help
-h, --help
Show help
Commands:
perf
params
read
run
verify
compile
For example, the read command reads a file and prints the internal graph representation from MIGraphX:
/opt/rocm/bin/migraphx-driver read --onnx /home/mev/source/migraphx_onnx/torchvision/resnet50i64.onnx
As another example, the following command measures the performance of running an ONNX file using the driver:
/opt/rocm/bin/migraphx-driver perf --onnx /home/mev/source/migraphx_onnx/torchvision/resnet50i64.onnx
The verify command checks internal consistency once a file has been read into MIGraphX:
/opt/rocm/bin/migraphx-driver verify --onnx /home/mev/source/migraphx_onnx/torchvision/resnet50i64.onnx
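The params command can be used in the same way to print the input parameters a model expects. The following is a sketch that assumes params accepts the same --onnx option as the commands above:
/opt/rocm/bin/migraphx-driver params --onnx /home/mev/source/migraphx_onnx/torchvision/resnet50i64.onnx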
The Python interface has been simplified so that it is no longer necessary to copy in parameters. An updated version of the webcam example from MIGraphX examples v0.2 is shown below.
A careful comparison with the previous example shows that it is no longer necessary to allocate parameters on the GPU or to explicitly copy parameters to the GPU or results back from the GPU; the Python interface now handles this by default.
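Stripped of the webcam handling, the essential flow reduces to a few calls. The following is a minimal sketch: the zero-filled input batch is a placeholder, while the model file and the input name '0' match the full example that follows.

import numpy as np
import migraphx

# parse the ONNX file and compile it for the GPU target
model = migraphx.parse_onnx("resnet50.onnx")
model.compile(migraphx.get_target("gpu"))

# run inference directly on a host-side numpy array; no explicit GPU copies are needed
image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input batch
result = np.array(model.run({'0': migraphx.argument(image)}), copy=False)
print(np.argmax(result[0]))

The complete webcam example follows.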
import numpy as np
import cv2
import json
import migraphx
# video settings
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
ret, frame = cap.read()
# neural network settings
model = migraphx.parse_onnx("resnet50.onnx")
model.compile(migraphx.get_target("gpu"))
# get labels
with open('imagenet_class_index.json') as json_data:
    class_idx = json.load(json_data)
idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
# primary loop to read webcam images
count = 0
while True:
    # capture frame by frame
    ret, frame = cap.read()
    if ret:  # check - some webcams need warmup operations on the frame
        cropped = frame[16:304, 8:232]  # 224x224
        trans = cropped.transpose(2, 0, 1)  # convert HWC to CHW
        # convert to float, normalize and make batch size = 1
        image = np.ascontiguousarray(
            np.expand_dims(trans.astype('float32') / 256.0, 0))
        # display the frame
        cv2.imshow('frame', cropped)
        migraphx_result = model.run({'0': migraphx.argument(image)})
        result = np.array(migraphx_result, copy=False)
        idx = np.argmax(result[0])
        print(idx2label[idx], " ", result[0][idx])
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# when everything is done, release the capture
cap.release()
cv2.destroyAllWindows()