Building a Virtual Human with ZEGO Avatar | Virtual Anchor Live Streaming Solution
2022-07-29 04:23:00 [ZEGO instant developer]
Virtual live streaming supports single-anchor video broadcasts, and also lets the anchor invite viewers to interact with the virtual anchor in a multi-party session.
Virtual Live Streaming Scene Architecture
The main architecture of the virtual live streaming scene is shown in the figure below (taking multi-person live interaction as an example): ![Virtual live streaming architecture](/img/8f/908a5648f6ce28ddf39a5d9d4e2067.png)
Sample App Source Code for Virtual Human Live Streaming
ZEGO provides sample app source code for virtual live streaming to help developers further understand the ZEGO virtual live streaming solution.
Prerequisites
- Integrate the ZEGO Express SDK into your project. For details, see "Real-Time Audio and Video - Quick Start - Integrate the SDK".
- Integrate the ZEGO Avatar SDK into your project. For details, see "Avatar - Quick Start - Integrate the SDK".
- Create a project in the ZEGO Console and obtain a valid AppID and AppSign. For details, see "Project Information" under "Console - Project Management".
Virtual Live Streaming Implementation Process
The overall process of the virtual live streaming scenario is as follows:
- After the virtual anchor enters the room, they set up an avatar via ZEGO Avatar, start capturing the ZEGO Avatar texture, preview, and publish the stream.
- When a viewer enters the room, they set up an avatar via ZEGO Avatar and play the anchor's stream.
- The anchor and viewers are connected through a signaling module, which controls the live streaming process in the current business room and synchronizes the current live status to every client.
- Whether or not any viewers are co-hosting, both the anchor and the viewers publish and play streams through the ZEGO audio and video cloud service.
- When a viewer requests to co-host with the anchor, the signaling module notifies the anchor and synchronizes the co-hosting viewer's profile information.
- After the anchor accepts the co-hosting request, the co-hosting viewer starts capturing the Avatar texture and publishing; all members in the room receive a stream-update notification and play the co-hosting viewer's audio and video stream.
- When the co-hosting viewer no longer wants to co-host, they send a stop request to the business backend. After receiving the stop notification from the signaling module, the co-hosting viewer stops publishing, stops capturing the Avatar texture, and stops expression following; the anchor and the other viewers in the room stop playing that viewer's stream.
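The co-hosting (Lianmai) flow above can be sketched as a small state machine. This is an illustrative sketch only; the class, enum, and method names below are ours and do not come from the ZEGO SDK. A real app would drive these transitions from business-backend signaling callbacks.

```java
// Hypothetical sketch of the co-hosting flow described above.
public class CoHostSession {
    public enum State { IDLE, REQUESTED, CONNECTED }

    private State state = State.IDLE;

    public State getState() { return state; }

    /** Viewer asks the business backend to co-host; signaling notifies the anchor. */
    public boolean request() {
        if (state != State.IDLE) return false;
        state = State.REQUESTED;
        return true;
    }

    /** Anchor approves; the viewer starts capturing the Avatar texture and publishing. */
    public boolean accept() {
        if (state != State.REQUESTED) return false;
        state = State.CONNECTED;
        return true;
    }

    /** Viewer stops co-hosting: stop publishing, stop capture, stop expression following. */
    public boolean disconnect() {
        if (state != State.CONNECTED) return false;
        state = State.IDLE;
        return true;
    }
}
```

Each transition returns false when called out of order, which mirrors how the signaling module only honors requests valid for the current room state.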
The detailed flow of virtual human live streaming is as follows:
1 Enable the Avatar Service
Contact ZEGO business staff to enable the Avatar service for your AppID so that you can create avatars.
2 Initialize the ZEGO Express Video SDK
Before using the Express Video SDK for video calls, you need to initialize the SDK. Because initialization involves many internal operations, we recommend initializing when the app starts.
/** Define the SDK engine object */
ZegoExpressEngine engine;
ZegoEngineProfile profile = new ZegoEngineProfile();
/** Obtain from the ZEGO Console; the format is 123456789L */
profile.appID = appID;
/** 64 characters; obtain from the ZEGO Console; the format is "0123456789012345678901234567890123456789012345678901234567890123" */
profile.appSign = appSign;
/** General scenario */
profile.scenario = ZegoScenario.GENERAL;
/** Set the app's Application object */
profile.application = getApplication();
/** Create the engine */
engine = ZegoExpressEngine.createEngine(profile, null);
When initializing the Express Video SDK, you need to enable RTC custom video capture, because the Avatar image is captured and published through a custom texture. Because the Avatar data is mirrored, you also need to set the mirror mode during initialization.
// Set both the local preview and the video seen by the playing side to mirror mode (the Avatar image is published mirrored)
engine.setVideoMirrorMode(ZegoVideoMirrorMode.BOTH_MIRROR);
ZegoCustomVideoCaptureConfig videoCaptureConfig = new ZegoCustomVideoCaptureConfig();
// Set the custom video capture frame data type to GL_TEXTURE_2D
videoCaptureConfig.bufferType = ZegoVideoBufferType.GL_TEXTURE_2D;
engine.enableCustomVideoCapture(true, videoCaptureConfig, ZegoPublishChannel.MAIN);
For more details on initializing the Express Video SDK, see "3.1 Create the Engine" in "Real-Time Audio and Video - Quick Start - Implement a Video Call".
3 Create an Avatar
Before using virtual live streaming, create your own personal avatar. For details, see "Create an Avatar".
4 Log In to the Live Room
Before the anchor starts streaming or a viewer watches the stream, they need to log in to the live room. After receiving the login-success callback, you can directly call the Express Video SDK interfaces to publish and play streams.
/** Create a user */
ZegoUser user = new ZegoUser("Anchor");
/** Start logging in to the room */
engine.loginRoom("MetaLive", user);
For more details on logging in to the live room with the Express Video SDK, see "3.2 Log In to a Room" in "Real-Time Audio and Video - Quick Start - Implement a Video Call".
5 Set Up a Personal Avatar
Initialize the ZegoCharacterHelper class and set the avatar you created, so that your personal avatar is displayed during the live stream.
// Initialize mZegoInteractEngine
if (mZegoInteractEngine == null) {
    mZegoInteractEngine = ZegoAvatarService.getInteractEngine();
}
// Initialize the ZegoCharacterHelper class
if (mCharacterHelper == null) {
    mCharacterHelper = new ZegoCharacterHelper(AvatarDataUtil.getResourcePath(context));
    mCharacterHelper.setExtendPackagePath(AvatarDataUtil.getPackagesPath(context));
}
// Default to a half-body (bust) view, with animation turned off first
cameraViewState = ZegoAvatarViewState.half;
setBodyState(cameraViewState, false);
// Get the default avatar data
String jsonDefaultStr = AvatarDefaultJson.getDefaultAvatarJson(isBoy, AvatarDefaultJson.isHead);
// isBoy is true for a male avatar
if (isBoy) {
    // Get the male avatar you created
    String jsonMaleStr = AvatarJsonMgr.getMaleJsonData(context);
    // If the male avatar data is empty, fall back to the default avatar
    mCharacterHelper.setAvatarJson(!TextUtils.isEmpty(jsonMaleStr) ? jsonMaleStr : jsonDefaultStr);
} else {
    // Get the female avatar you created
    String jsonFemaleStr = AvatarJsonMgr.getFemaleJsonData(context);
    // If the female avatar data is empty, fall back to the default avatar
    mCharacterHelper.setAvatarJson(!TextUtils.isEmpty(jsonFemaleStr) ? jsonFemaleStr : jsonDefaultStr);
}
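The fallback decision above (use the saved avatar JSON if present, otherwise the default for the chosen gender) can be isolated into a plain helper. This is a sketch in standard Java rather than Android code; the class name is ours, and `TextUtils.isEmpty(s)` on Android is equivalent to `s == null || s.length() == 0`.

```java
// Hypothetical helper mirroring the avatar-JSON fallback logic above.
public final class AvatarJsonPicker {
    private AvatarJsonPicker() {}

    /** Returns savedJson when it is non-null and non-empty, otherwise defaultJson. */
    public static String pick(String savedJson, String defaultJson) {
        return (savedJson != null && !savedJson.isEmpty()) ? savedJson : defaultJson;
    }
}
```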
6 Live Streaming with a Single Virtual Anchor
6.1 Obtain the ZEGO Avatar Texture
The Avatar's image data is delivered to the upper layer through the startCaptureAvatar callback and published through custom capture. Because the Avatar data has a transparent background and RTC has no background, the converted background defaults to black; developers can set the background to any desired color.
// Set the width and height of the returned Avatar content as needed
AvatarCaptureConfig config = new AvatarCaptureConfig(width, height);
// Start capturing the Avatar texture
mCharacterHelper.startCaptureAvatar(config, new OnAvatarCaptureCallback() {
    @Override
    public void onAvatarTextureAvailable(int textureID, int width, int height) {
        // The background color only takes effect when useFBO is true
        boolean useFBO = true;
        if (mBgRender == null) {
            mBgRender = new TextureBgRender(textureID, useFBO, width, height, Texture2dProgram.ProgramType.TEXTURE_2D_BG);
        }
        mBgRender.setInputTexture(textureID);
        // Set the desired background color (RGB)
        mBgRender.setBgColor(rColor, gColor, bColor, 1.0f);
        mBgRender.draw(true);
        // Push the data out through the RTC SDK's custom capture via sendCustomVideoCaptureTextureData
        engine.sendCustomVideoCaptureTextureData(mBgRender.getOutputTextureID(), width, height, System.currentTimeMillis());
    }
});
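The snippet above passes `rColor`, `gColor`, and `bColor` to `setBgColor`. Assuming the renderer expects normalized [0.0, 1.0] channels (as OpenGL-style APIs typically do, and as the 1.0f alpha suggests), a packed 0xRRGGBB color can be unpacked like this. The helper name and the normalized-float assumption are ours, not from the SDK.

```java
// Hypothetical helper: unpack a 0xRRGGBB int into normalized float channels
// suitable for an OpenGL-style setBgColor(r, g, b, a) call.
public final class ColorUtil {
    private ColorUtil() {}

    public static float[] toRgbFloats(int rgb) {
        float r = ((rgb >> 16) & 0xFF) / 255.0f; // red channel
        float g = ((rgb >> 8) & 0xFF) / 255.0f;  // green channel
        float b = (rgb & 0xFF) / 255.0f;         // blue channel
        return new float[] { r, g, b };
    }
}
```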
6.2 The Virtual Anchor Starts Previewing and Publishing
To publish a stream to the ZEGO audio and video cloud service, the anchor needs to generate a unique StreamID, then start previewing and publishing.
// Start the preview
engine.startPreview(new ZegoCanvas(preview_view));
// Publish the stream
engine.startPublishingStream("Anchor");
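The text above notes that each publisher needs its own unique StreamID; the samples hard-code "Anchor" and "Audience1" only for brevity. One hypothetical scheme (ours, not mandated by ZEGO) combines a role, the user ID, and a random suffix:

```java
import java.util.UUID;

// Hypothetical StreamID generator; the naming scheme is ours, not ZEGO's.
public final class StreamIds {
    private StreamIds() {}

    public static String make(String role, String userId) {
        // Produces e.g. "anchor_u123_9f3c2a1b" - unique per publish session.
        return role + "_" + userId + "_" + UUID.randomUUID().toString().substring(0, 8);
    }
}
```

The generated ID would then be passed to `startPublishingStream` instead of a hard-coded literal, and shared with other clients via the signaling module so they know which stream to play.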
For more details on previewing and publishing with the Express Video SDK, see "3.3 Publish a Stream" in "Real-Time Audio and Video - Quick Start - Implement a Video Call".
6.3 Viewers Play the Stream
When a viewer enters the room, they receive a stream-update notification from the Express Video SDK, filter out the anchor's StreamID, and play the stream.
// The viewer plays the stream
ZegoCanvas zegoCanvas = new ZegoCanvas(view);
zegoCanvas.viewMode = ZegoViewMode.ASPECT_FILL;
engine.startPlayingStream("Anchor", zegoCanvas);
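The "filter out the anchor's StreamID" step can be sketched independently of the SDK: given the list of stream IDs from a stream-update callback, keep only the streams the viewer should play. The prefix convention used here is an assumption of ours, not something ZEGO prescribes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical filter for stream-update notifications: pick out the
// anchor's streams by a naming convention we assume for illustration.
public final class StreamFilter {
    private StreamFilter() {}

    public static List<String> anchorStreams(List<String> updatedStreamIds) {
        List<String> result = new ArrayList<>();
        for (String id : updatedStreamIds) {
            if (id.startsWith("Anchor")) {
                result.add(id);
            }
        }
        return result;
    }
}
```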
For more details on playing streams with the Express Video SDK, see "3.4 Play a Stream" in "Real-Time Audio and Video - Quick Start - Implement a Video Call".
7 Co-Hosting Between Viewers and the Virtual Anchor
7.1 The Co-Hosting Viewer Publishes a Stream
A viewer calls the business backend's co-hosting request interface. After the call succeeds, the business backend sends a custom co-hosting-request signal to the anchor. After the anchor receives the signal, they call the business backend's approval interface; once that call succeeds, the business backend broadcasts a co-hosting-success signal to all members in the room. After the co-hosting viewer receives the signal, they start publishing, following the same process as in "6.1 Obtain the ZEGO Avatar Texture" to push the Avatar content out through custom capture.
// The co-hosting viewer publishes a stream
engine.startPublishingStream("Audience1");
7.2 The Virtual Anchor Plays the Co-Host's Stream
After the co-hosting viewer publishes, all members in the room receive a stream-update notification from the Express Video SDK. The anchor obtains the co-hosting viewer's StreamID and plays the stream.
The other viewers in the room also receive the stream-update callback, obtain the co-hosting viewer's StreamID, and play the stream.
// The anchor plays the stream
ZegoCanvas zegoCanvas = new ZegoCanvas(view);
zegoCanvas.viewMode = ZegoViewMode.ASPECT_FILL;
engine.startPlayingStream("Audience1", zegoCanvas);
7.3 The Viewer Stops Co-Hosting
The co-hosting viewer calls the business backend's stop-co-hosting interface. After the call succeeds, the business backend broadcasts a stop signal to all members in the room. After receiving the signal, the co-hosting viewer stops publishing, stops capturing the Avatar texture, and stops expression-following detection; the other viewers in the room stop playing that stream after receiving the signal.
// The co-hosting viewer stops previewing and publishing
engine.stopPreview();
engine.stopPublishingStream();
// Other members in the room stop playing the stream
engine.stopPlayingStream("Audience1");

// Stop capturing the Avatar texture
public void stopCaptureAvatar() {
    if (mCharacterHelper != null) {
        mCharacterHelper.stopCaptureAvatar();
    }
}

// Stop expression following
public void stopDetectExpression() {
    if (mZegoInteractEngine != null && mZegoInteractEngine.isStarted()) {
        mZegoInteractEngine.stopDetectExpression();
    }
}
For more details on stopping publishing and playing with the Express Video SDK, see "4.2 Stop Publishing and Playing Streams" in "Real-Time Audio and Video - Quick Start - Implementation Process".
8 Get More Help: ZEGO Virtual Anchor Solution
The virtual live streaming scene is a new form of live streaming under the metaverse social entertainment model. Virtual avatars replace real people, creating a distinctive live experience, with support for expression following, gesture-triggered special effects, and other gameplay. The scene also supports video interaction between multiple avatars, making it easier to attract users into co-hosting and enhancing their willingness to spend and their stickiness. For more information, see the virtual live streaming documentation.
Seventh-anniversary benefit: submit the form to contact sales for a chance to get a one-month free trial of ZEGO Avatar.