Cloud Speech Nonstreaming gRPC Objective-C Sample

This app demonstrates how to make nonstreaming gRPC connections to the Cloud Speech API to recognize speech in recorded audio.

Prerequisites

  • An iOS API key for the Cloud Speech API (see the docs to learn more)
  • An OS X machine (the app runs in the iOS Simulator or on an iOS device)
  • Xcode 7
  • CocoaPods version 1.0 or later

Quickstart

  • Clone this repo and cd into this directory.
  • Run pod install to download and build CocoaPods dependencies.
  • Open the project by running open Speech.xcworkspace.
  • In Speech/SpeechRecognitionService.m, replace YOUR_API_KEY with the API key obtained above.
  • Build and run the app.

Running the app

  • As with all Google Cloud APIs, every call to the Speech API must be associated with a project within the Google Cloud Console that has the Speech API enabled. This is described in more detail in the getting started doc, but in brief:

  • Clone this repository from GitHub. If you have git installed, you can do this by executing the following command:

      $ git clone https://github.com/GoogleCloudPlatform/ios-docs-samples.git
    

    This will download the repository of samples into the directory ios-docs-samples.

  • cd into this directory in the repository you just cloned, and run pod install to prepare all CocoaPods dependencies.

  • Run open Speech.xcworkspace to open this project in Xcode. Since we are using CocoaPods, be sure to open the workspace and not Speech.xcodeproj.

  • In Xcode's Project Navigator, open the SpeechRecognitionService.m file within the Speech directory.

  • Find the line where the API_KEY is set. Replace the string value with the iOS API key obtained from the Google Cloud Console above. This key is the credential used to authenticate all requests to the Speech API. Calls to the API are thus associated with the project you created above, for access and billing purposes.
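
    In the sample this is a constant near the top of the file. A minimal sketch of what the edited line looks like (assuming the key is stored in a #define, as the API_KEY name suggests):

      // Speech/SpeechRecognitionService.m
      // Paste your iOS API key in place of the placeholder string.
      #define API_KEY @"YOUR_API_KEY"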

  • You are now ready to build and run the project. In Xcode you can do this by clicking the 'Play' button in the top left. This will launch the app on the simulator or on the device you've selected. Be sure that the 'Speech' target is selected in the popup near the top left of the Xcode window.

  • Tap the Record Audio button. This uses a custom AudioController class to record audio to an in-memory instance of NSMutableData. As it runs, the AudioController logs the number of samples and average sample magnitude for each packet that it captures.
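
    A minimal sketch of what such a capture callback can look like (the processSampleData: method name and the 16-bit linear PCM sample format are assumptions for illustration, not necessarily the sample's exact code):

      // Called once per captured packet: buffer the bytes and log simple stats.
      - (void)processSampleData:(NSData *)data {
        [self.audioData appendData:data];  // self.audioData is an NSMutableData buffer
        NSInteger count = (NSInteger)(data.length / sizeof(SInt16));  // assumes 16-bit samples
        const SInt16 *samples = (const SInt16 *)data.bytes;
        long long sum = 0;
        for (NSInteger i = 0; i < count; i++) {
          sum += llabs((long long)samples[i]);
        }
        NSLog(@"audio packet: %ld samples, average magnitude %lld",
              (long)count, count > 0 ? sum / count : 0);
      }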

  • Say a few words, and then tap the Stop Recording button.

  • Tap the Process Recorded Audio button. This causes the processAudio: method to call our custom SpeechRecognitionService class, which constructs and sends a gRPC request to the Speech API endpoint. Notice the options passed as members of the InitialRecognizeRequest object. When the API call returns, the results are displayed in the scrollable text area at the bottom of the screen.
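
    A sketch of that request construction, assuming the Objective-C classes generated from the early Speech protos (RecognizeRequest, InitialRecognizeRequest, AudioRequest, the Speech service stub, and the header name here are assumptions based on standard gRPC Objective-C codegen, not verbatim sample code):

      // Build a nonstreaming recognize request around the recorded audio.
      RecognizeRequest *request = [RecognizeRequest message];
      request.initialRequest.encoding = InitialRecognizeRequest_AudioEncoding_Linear16;  // raw 16-bit PCM
      request.initialRequest.sampleRate = 16000;        // must match the recorded audio
      request.initialRequest.languageCode = @"en-US";
      request.initialRequest.maxAlternatives = 1;
      request.audioRequest.content = audioData;         // the NSData captured earlier

      // Send it over gRPC, authenticating the request with the API key header.
      Speech *client = [[Speech alloc] initWithHost:@"speech.googleapis.com"];
      ProtoRPC *call =
          [client RPCToNonStreamingRecognizeWithRequest:request
                                                handler:^(NonStreamingRecognizeResponse *response,
                                                          NSError *error) {
            // Display the response (or the error) in the text area on the main thread.
          }];
      call.requestHeaders[@"X-Goog-Api-Key"] = API_KEY;
      [call start];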