- You have read Building a Meeting Application using the Amazon Chime SDK. You understand the basic architecture of Amazon Chime SDK and deployed a serverless/browser demo meeting application.
- You have a basic to intermediate understanding of iOS development and tools.
- You have installed Xcode version 11.3 or later.
Note: Deploying the serverless/browser demo and receiving traffic from the demo created in this post can incur AWS charges.
To declare the Amazon Chime SDK as a dependency, you must complete the following steps.
- Follow the steps in the Setup section of the README file to download and import the Amazon Chime SDK.
- Add `Privacy - Camera Usage Description` to the `Info.plist` of your Xcode project. If you plan to have users join with `AudioDeviceCapabilities.inputAndOutput` (the default option), also add `Privacy - Microphone Usage Description` to the `Info.plist`.
- Request microphone and camera permissions. You can check `AVAudioSession.recordPermission` and `AVCaptureDevice.authorizationStatus` synchronously and fall back to requesting permission, or call `requestRecordPermission` and `requestAccess` with an asynchronous completion handler.
import AVFoundation

// Only needed if joining meetings with `AudioDeviceCapabilities.inputAndOutput`.
switch AVAudioSession.sharedInstance().recordPermission {
case .granted:
    // You can use audio.
    ...
case .denied:
    // You may not use audio. Your application should handle this.
    // One way to handle this case is to have the user join with `AudioDeviceCapabilities.outputOnly` or
    // `AudioDeviceCapabilities.none` instead, which do not require microphone permissions.
    ...
case .undetermined:
    // You must request permission.
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        if granted {
            ...
        } else {
            // The user rejected your request.
        }
    }
@unknown default:
    break
}

// Request permission for video. You can similarly check first
// using AVCaptureDevice.authorizationStatus(for: .video).
AVCaptureDevice.requestAccess(for: .video) { granted in
    if !granted {
        // The user rejected your request.
    }
}
To start a meeting, you need to create a meeting session. We provide `DefaultMeetingSession` as a concrete implementation of the `MeetingSession` protocol. `DefaultMeetingSession` takes both a `MeetingSessionConfiguration` and a `ConsoleLogger`.
- Create a `ConsoleLogger` for logging.
let logger = ConsoleLogger(name: "MeetingViewController")
- Make a POST request to `server_url` to create a meeting and an attendee. The `server_url` is the URL of the serverless demo meeting application you deployed (see the Prerequisites section); it takes the form `https://xxxxx.xxxxx.xxx.com/Prod/`. A request sketch follows the snippet below.
var url = "\(server_url)join?title=\(meetingId)&name=\(attendeeName)&region=\(meetingRegion)"
url = encodeStrForURL(str: url)

// Helper function for URL encoding.
public static func encodeStrForURL(str: String) -> String {
    return str.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? str
}
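The snippet above only builds the URL; the demo's `join` endpoint expects an HTTP POST. Below is a minimal sketch of the request using `URLSession`. The `postJoinRequest` helper name, the empty request body, and the completion-based shape are illustrative assumptions, not part of the SDK.

import Foundation

// Minimal sketch: POST to the demo's join endpoint and hand the raw
// response data to the caller for decoding in the next step.
func postJoinRequest(urlString: String, completion: @escaping (Data?) -> Void) {
    guard let url = URL(string: urlString) else {
        completion(nil)
        return
    }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    URLSession.shared.dataTask(with: request) { data, response, error in
        guard error == nil,
              let httpResponse = response as? HTTPURLResponse,
              (200..<300).contains(httpResponse.statusCode) else {
            completion(nil)
            return
        }
        completion(data)
    }.resume()
}

The `data` handed to the completion is what the JSON decoding in the next step consumes.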
- Create a `MeetingSessionConfiguration`. The JSON response of the POST request contains the data required to construct a `CreateMeetingResponse` and a `CreateAttendeeResponse`. The `MeetingResponse` model used in the decoding below is one you define yourself; a sketch follows the snippet.
let joinMeetingResponse = try jsonDecoder.decode(MeetingResponse.self, from: data)
// Construct CreateMeetingResponse and CreateAttendeeResponse.
let meetingResp = CreateMeetingResponse(meeting:
Meeting(
externalMeetingId: joinMeetingResponse.joinInfo.meeting.meeting.externalMeetingId,
mediaPlacement: MediaPlacement(
audioFallbackUrl: joinMeetingResponse.joinInfo.meeting.meeting.mediaPlacement.audioFallbackUrl,
audioHostUrl: joinMeetingResponse.joinInfo.meeting.meeting.mediaPlacement.audioHostUrl,
signalingUrl: joinMeetingResponse.joinInfo.meeting.meeting.mediaPlacement.signalingUrl,
turnControlUrl: joinMeetingResponse.joinInfo.meeting.meeting.mediaPlacement.turnControlUrl
),
mediaRegion: joinMeetingResponse.joinInfo.meeting.meeting.mediaRegion,
meetingId: joinMeetingResponse.joinInfo.meeting.meeting.meetingId
)
)
let attendeeResp = CreateAttendeeResponse(attendee:
Attendee(attendeeId: joinMeetingResponse.joinInfo.attendee.attendee.attendeeId,
externalUserId: joinMeetingResponse.joinInfo.attendee.attendee.externalUserId,
joinToken: joinMeetingResponse.joinInfo.attendee.attendee.joinToken
)
)
// Construct MeetingSessionConfiguration.
let meetingSessionConfig = MeetingSessionConfiguration(
createMeetingResponse: meetingResp,
createAttendeeResponse: attendeeResp
)
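`MeetingResponse` is not an SDK type; it is a `Codable` model you define to mirror the demo's JSON. Here is a minimal sketch shaped to match the nested access pattern above (`joinInfo.meeting.meeting`, `joinInfo.attendee.attendee`). The PascalCase JSON key names are an assumption; verify them against your deployed demo's actual response.

// Hypothetical Codable models mirroring the demo's join response.
struct MeetingResponse: Codable {
    let joinInfo: JoinInfo
    enum CodingKeys: String, CodingKey { case joinInfo = "JoinInfo" }
}

struct JoinInfo: Codable {
    let meeting: MeetingWrapper
    let attendee: AttendeeWrapper
    enum CodingKeys: String, CodingKey {
        case meeting = "Meeting"
        case attendee = "Attendee"
    }
}

struct MeetingWrapper: Codable {
    let meeting: MeetingInfo
    enum CodingKeys: String, CodingKey { case meeting = "Meeting" }
}

struct MeetingInfo: Codable {
    let externalMeetingId: String?
    let mediaPlacement: MediaPlacementInfo
    let mediaRegion: String
    let meetingId: String
    enum CodingKeys: String, CodingKey {
        case externalMeetingId = "ExternalMeetingId"
        case mediaPlacement = "MediaPlacement"
        case mediaRegion = "MediaRegion"
        case meetingId = "MeetingId"
    }
}

struct MediaPlacementInfo: Codable {
    let audioFallbackUrl: String
    let audioHostUrl: String
    let signalingUrl: String
    let turnControlUrl: String
    enum CodingKeys: String, CodingKey {
        case audioFallbackUrl = "AudioFallbackUrl"
        case audioHostUrl = "AudioHostUrl"
        case signalingUrl = "SignalingUrl"
        case turnControlUrl = "TurnControlUrl"
    }
}

struct AttendeeWrapper: Codable {
    let attendee: AttendeeInfo
    enum CodingKeys: String, CodingKey { case attendee = "Attendee" }
}

struct AttendeeInfo: Codable {
    let attendeeId: String
    let externalUserId: String
    let joinToken: String
    enum CodingKeys: String, CodingKey {
        case attendeeId = "AttendeeId"
        case externalUserId = "ExternalUserId"
        case joinToken = "JoinToken"
    }
}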
- Now create an instance of `DefaultMeetingSession`.
let currentMeetingSession = DefaultMeetingSession(
configuration: meetingSessionConfig,
logger: logger
)
`AudioVideoFacade` is used to control the audio and video experience. Inside the `DefaultMeetingSession` object, `audioVideo` is an instance variable of type `AudioVideoFacade`.
- To start audio, you can call `start()` on `AudioVideoFacade`.
do {
try self.currentMeetingSession?.audioVideo.start()
} catch PermissionError.audioPermissionError {
// Handle the case where no permission is granted.
} catch {
// Catch other errors.
}
- You can turn local audio on and off by calling the mute and unmute APIs (a toggle sketch follows the snippet below).
// Mute audio.
self.currentMeetingSession?.audioVideo.realtimeLocalMute()
// Unmute audio.
self.currentMeetingSession?.audioVideo.realtimeLocalUnmute()
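Both calls return a `Bool` indicating whether the operation succeeded, which you can use to keep UI state in sync. A minimal sketch of a toggle handler follows; the `isMuted` property and the button action are hypothetical names for illustration.

// Hypothetical mute toggle for a button in your meeting view controller.
var isMuted = false

@objc func muteButtonTapped(_ sender: UIButton) {
    guard let audioVideo = currentMeetingSession?.audioVideo else { return }
    if isMuted {
        // Only update local state if the SDK call succeeds.
        if audioVideo.realtimeLocalUnmute() { isMuted = false }
    } else {
        if audioVideo.realtimeLocalMute() { isMuted = true }
    }
}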
- There are two sets of APIs for starting and stopping video. `startLocalVideo` and `stopLocalVideo` turn the camera on the user's device on and off. `startRemoteVideo` and `stopRemoteVideo` control receiving video from other participants in the same meeting.
// Start local video.
do {
try self.currentMeetingSession?.audioVideo.startLocalVideo()
} catch PermissionError.videoPermissionError {
// Handle the case where no permission is granted.
} catch {
// Catch other errors.
}
// Start remote video.
self.currentMeetingSession?.audioVideo.startRemoteVideo()
// Stop local video.
self.currentMeetingSession?.audioVideo.stopLocalVideo()
// Stop remote video.
self.currentMeetingSession?.audioVideo.stopRemoteVideo()
- You can switch the camera for local video between front-facing and rear-facing. Call `switchCamera`, then branch on the camera type returned by `getActiveCamera`.
self.currentMeetingSession?.audioVideo.switchCamera()
// Add logic to respond to camera type change.
switch self.currentMeetingSession?.audioVideo.getActiveCamera()?.type {
case MediaDeviceType.videoFrontCamera:
...
case MediaDeviceType.videoBackCamera:
...
default:
...
}
By implementing `videoTileDidAdd` and `videoTileDidRemove` on the `VideoTileObserver`, you can track the currently active video tiles. A video tile can carry either camera video or screen share. `DefaultVideoRenderView` renders the video frames on a `UIImageView`. Once you have both a `VideoTileState` and a `DefaultVideoRenderView`, bind them by calling `bindVideoView`.
// Register the observer (self must conform to VideoTileObserver).
audioVideo.addVideoTileObserver(observer: self)

func videoTileDidAdd(tileState: VideoTileState) {
    logger.info(msg: "Video tile added, tileId: \(tileState.tileId), attendeeId: \(tileState.attendeeId), isContent: \(tileState.isContent)")
    showVideoTile(tileState)
}

func videoTileDidRemove(tileState: VideoTileState) {
    logger.info(msg: "Video tile removed, tileId: \(tileState.tileId), attendeeId: \(tileState.attendeeId)")
    // Unbind the video tile to release its resources.
    audioVideo.unbindVideoView(tileId: tileState.tileId)
}

// The tile could be remote or local video.
func showVideoTile(_ tileState: VideoTileState) {
    // Bind the video tile to a DefaultVideoRenderView to render it.
    audioVideo.bindVideoView(videoView: someDefaultVideoRenderView, tileId: tileState.tileId)
}
After building and running your iOS application, you can verify the end-to-end behavior. Test it by joining the same meeting from your iOS device and a browser (using the demo application you set up in the prerequisites).
If you no longer want to keep the demo active in your AWS account and want to avoid incurring AWS charges, remove the demo resources by deleting the two AWS CloudFormation stacks created in the prerequisites; you can find them in the AWS CloudFormation console.