Background:
The official documentation on this topic for PC amounts to only a few sentences, with no code and no demo. I spent several days exploring on my own and went down the wrong path; only after several rounds of back-and-forth with support, plus a complaint, did they provide some internal reference code.
I. Sending Custom Video
1. Data types involved
1) Stream index (StreamIndex)
/**
 * @type keytype
 * @brief Stream type
 */
enum StreamIndex {
    /**
     * @brief Mainstream, including: <br>
     *        + Video/audio captured by the camera/microphone using internal capturing; <br>
     *        + Video/audio captured by a custom method.
     */
    kStreamIndexMain = 0,
    /**
     * @brief Screen-sharing stream: the video stream shared during screen sharing, or the locally played audio stream from the sound card.
     */
    kStreamIndexScreen = 1,
    /**
     * @hidden for internal use only
     */
    kStreamIndex3rd,
    /**
     * @hidden for internal use only
     */
    kStreamIndex4th,
    /**
     * @hidden for internal use only
     */
    kStreamIndex5th,
    /**
     * @hidden for internal use only
     */
    kStreamIndex6th,
    /**
     * @hidden for internal use only
     */
    kStreamIndex7th,
    /**
     * @hidden for internal use only
     */
    kStreamIndexMax,
};
2) Video source type (VideoSourceType)
/**
 * @type keytype
 * @brief Video source type
 */
enum VideoSourceType {
    /**
     * @brief Custom (external) video capture source
     */
    kVideoSourceTypeExternal = 0,
    /**
     * @brief Internal video capture source
     */
    kVideoSourceTypeInternal = 1,
    /**
     * @brief Custom encoded video source. <br>
     *        Push only the encoded stream with the largest resolution; the SDK automatically transcodes it into multiple lower-quality streams for simulcast.
     */
    kVideoSourceTypeEncodedWithAutoSimulcast = 2,
    /**
     * @brief Custom encoded video source. <br>
     *        The SDK does not automatically generate multiple streams for simulcast; you need to generate and push streams of different qualities yourself.
     */
    kVideoSourceTypeEncodedWithoutAutoSimulcast = 3,
};
3) Video frame interface (IVideoFrame)
/**
 * @type keytype
 * @brief Video frame interface
 */
class IVideoFrame {
public:
    /**
     * @brief Gets the video frame type. See VideoFrameType{@link #VideoFrameType}.
     */
    virtual VideoFrameType frameType() const = 0;
    /**
     * @brief Gets the video frame format. See VideoPixelFormat{@link #VideoPixelFormat}.
     */
    virtual VideoPixelFormat pixelFormat() const = 0;
    /**
     * @brief Gets the video content type. See VideoContentType{@link #VideoContentType}.
     */
    virtual VideoContentType videoContentType() const = 0;
    /**
     * @brief Gets the video frame timestamp in microseconds.
     */
    virtual int64_t timestampUs() const = 0;
    /**
     * @brief Gets the video frame width in px.
     */
    virtual int width() const = 0;
    /**
     * @brief Gets the video frame height in px.
     */
    virtual int height() const = 0;
    /**
     * @brief Gets the video frame rotation angle. See VideoRotation{@link #VideoRotation}.
     */
    virtual VideoRotation rotation() const = 0;
    /**
     * @hidden for internal use only
     * @brief Gets mirror information.
     * @return Whether the video needs to be mirrored: <br>
     *         + True: Yes <br>
     *         + False: No
     */
    virtual bool flip() const = 0;
    /**
     * @brief Gets the video frame color space. See ColorSpace{@link #ColorSpace}.
     */
    virtual ColorSpace colorSpace() const = 0;
    /**
     * @brief Number of color planes in the video frame.
     * @note YUV formats are categorized into planar and packed formats. In a planar format, the Y, U, and V components are stored as separate planes; in a packed format, they are interleaved in a single plane.
     */
    virtual int numberOfPlanes() const = 0;
    /**
     * @brief Gets the plane data pointer.
     * @param plane_index Plane data index.
     */
    virtual uint8_t* getPlaneData(int plane_index) = 0;
    /**
     * @brief Gets the length (stride) of a data row in the plane.
     * @param plane_index Plane data index.
     */
    virtual int getPlaneStride(int plane_index) = 0;
    /**
     * @brief Gets the extended data pointer.
     * @param size Size of the extended data in bytes.
     */
    virtual uint8_t* getExtraDataInfo(int& size) const = 0;  // NOLINT
    /**
     * @brief Gets the supplementary data pointer.
     * @param size Size of the supplementary data in bytes.
     */
    virtual uint8_t* getSupplementaryInfo(int& size) const = 0;  // NOLINT
    /**
     * @brief Gets the local (hardware) buffer pointer.
     */
    virtual void* getHwaccelBuffer() = 0;
    /**
     * @brief Gets the hardware acceleration context (e.g. an OpenGL or Vulkan context).
     */
    virtual void* getHwaccelContext() = 0;
#ifdef __ANDROID__
    /**
     * @brief Gets the hardware acceleration context as a Java object (Android only, e.g. an OpenGL context).
     * @return A JavaLocalRef; when it is no longer needed, release it manually with DeleteLocalRef(env, jobject).
     */
    virtual jobject getAndroidHwaccelContext() = 0;
#endif
    /**
     * @brief Gets the texture matrix (texture-type frames only).
     */
    virtual void getTexMatrix(float matrix[16]) = 0;
    /**
     * @brief Gets the texture ID (texture-type frames only).
     */
    virtual uint32_t getTextureId() = 0;
    /**
     * @brief Makes a shallow copy of the video frame and returns a pointer to it.
     */
    virtual IVideoFrame* shallowCopy() = 0;
    /**
     * @brief Releases the video frame.
     * @note After calling pushExternalVideoFrame{@link #IRTCVideo#pushExternalVideoFrame} to push the video frame, you do not need to call this API to release the resource.
     */
    virtual void release() = 0;
    /**
     * @brief Converts the video frame to I420 format.
     */
    virtual void toI420() = 0;
    /**
     * @brief Gets the camera ID of the frame. See CameraID{@link #CameraID}.
     */
    virtual CameraID getCameraId() const = 0;
    /**
     * @hidden for internal use only on Windows and Android
     * @type api
     * @brief Gets the Tile information of a panoramic video frame to enable FoV (Field of View) rendering.
     * @return The video frame within the FoV, updated in real time with the local head pose, including the high-definition view and the low-definition background. See FovVideoTileInfo{@link #FovVideoTileInfo}.
     */
    virtual FovVideoTileInfo getFovTile() = 0;
protected:
    /**
     * @hidden constructor/destructor
     * @brief Destructor
     */
    virtual ~IVideoFrame() = default;
};
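When consuming an IVideoFrame on the receiving side, the value returned by getPlaneStride() can be larger than the visible row width (hardware decoders often pad rows), so extracting a plane requires a row-by-row copy. Below is a minimal sketch of that pattern in plain C++; it uses no SDK types, and the buffer sizes are illustrative:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copies a possibly padded plane (stride >= rowBytes) into a tightly
// packed buffer, dropping the per-row padding.
std::vector<uint8_t> packPlane(const uint8_t* src, int stride,
                               int rowBytes, int rows) {
    std::vector<uint8_t> dst(static_cast<size_t>(rowBytes) * rows);
    for (int r = 0; r < rows; ++r) {
        // Source rows advance by `stride`, destination rows by `rowBytes`.
        std::memcpy(dst.data() + static_cast<size_t>(r) * rowBytes,
                    src + static_cast<size_t>(r) * stride,
                    static_cast<size_t>(rowBytes));
    }
    return dst;
}
```

In SDK terms, `src` would come from getPlaneData(i), `stride` from getPlaneStride(i), and `rowBytes`/`rows` from width()/height() (halved for the chroma planes of an I420 frame).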
II. How to Send Custom Video
1. Configure the custom video source (before or after joining the room)
Before pushing external video frames, you must call setVideoSourceType{@link #setVideoSourceType} to enable external video source capture.
// Main stream with an external (custom) capture source:
int ret1 = m_video->setVideoSourceType(bytertc::kStreamIndexMain, bytertc::kVideoSourceTypeExternal);
// For screen sharing, use the screen stream index instead:
// int ret1 = m_video->setVideoSourceType(bytertc::kStreamIndexScreen, bytertc::kVideoSourceTypeExternal);
2. Pushing custom frames
0) The pushExternalVideoFrame interface
/**
 * @type api
 * @region Video Management
 * @brief Pushes external video frames.
 * @param frame The video frame to push. See IVideoFrame{@link #IVideoFrame}.
 * @return API call result: <br>
 *         + 0: Success. <br>
 *         + <0: Failure. See ReturnStatus{@link #ReturnStatus} for specific reasons.
 * @note
 *       + Supported formats: I420, NV12, RGBA, BGRA, ARGB. <br>
 *       + This function runs in the caller's thread. <br>
 *       + Before pushing external video frames, you must call setVideoSourceType{@link #setVideoSourceType} to enable external video source capture.
 */
virtual int pushExternalVideoFrame(IVideoFrame* frame) = 0;
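The supported raw formats differ in plane count and per-pixel size, which matters when sizing the buffers you hand to the SDK. A small helper illustrates the arithmetic; it is plain C++ for illustration, not part of the SDK:

```cpp
#include <cstddef>
#include <string>

// Bytes needed for one tightly packed frame of the given format.
// The format names mirror the supported-format list above.
size_t frameBufferSize(const std::string& fmt, int width, int height) {
    size_t pixels = static_cast<size_t>(width) * height;
    if (fmt == "I420" || fmt == "NV12")
        return pixels * 3 / 2; // 4:2:0: full luma plane + half-size chroma
    if (fmt == "RGBA" || fmt == "BGRA" || fmt == "ARGB")
        return pixels * 4;     // 4 bytes per pixel, single plane
    return 0;                  // unknown format
}
```

For example, a 320 × 240 frame needs 115,200 bytes in I420 but 307,200 bytes in RGBA, which is why I420 is usually preferred for bandwidth-sensitive pipelines.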
1) Sending RGBA data
#include <chrono>
#include <iostream>
#include <memory>
#include <thread>

void startPushFrames(bytertc::IRTCVideo* m_video) {
    std::thread([m_video]() {
        const int width = 320;
        const int height = 240;
        const int frameIntervalMs = 40; // ~25 fps
        const size_t bufferSize = static_cast<size_t>(width) * height * 4; // RGBA
        // 1) Allocate the buffer once, outside the loop
        auto rgbaBuffer = std::make_unique<uint8_t[]>(bufferSize);
        for (int i = 0; i < 2'500'000; ++i) {
            // 2) Fill with a solid color (cycling R -> G -> B)
            int color = i % 3;
            for (int p = 0; p < width * height; ++p) {
                rgbaBuffer[p * 4 + 0] = (color == 0) ? 255 : 0; // R
                rgbaBuffer[p * 4 + 1] = (color == 1) ? 255 : 0; // G
                rgbaBuffer[p * 4 + 2] = (color == 2) ? 255 : 0; // B
                rgbaBuffer[p * 4 + 3] = 255;                    // A
            }
            // 3) Build the VideoFrameBuilder
            bytertc::VideoFrameBuilder builder;
            builder.width = width;
            builder.height = height;
            builder.data[0] = rgbaBuffer.get();
            builder.linesize[0] = width * 4;
            builder.data[1] = nullptr;
            builder.linesize[1] = 0;
            builder.data[2] = nullptr;
            builder.linesize[2] = 0;
            // 4) The caller owns the memory; do not let the SDK free it
            builder.memory_deleter = nullptr;
            builder.pixel_fmt = bytertc::kVideoPixelFormatRGBA;
            builder.rotation = bytertc::kVideoRotation0;
            builder.timestamp_us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now().time_since_epoch()
            ).count();
            builder.color_space = bytertc::kColorSpaceYCbCrBT601FullRange;
            // 5) Build and push the frame
            bytertc::IVideoFrame* pFrame = bytertc::buildVideoFrame(builder);
            int ret = m_video->pushExternalVideoFrame(pFrame);
            if (ret != 0) {
                std::cerr << "pushExternalVideoFrame failed: " << ret << std::endl;
            }
            // 6) Pace the frame rate
            std::this_thread::sleep_for(std::chrono::milliseconds(frameIntervalMs));
        }
        // rgbaBuffer is released automatically when the lambda ends
    }).detach();
}
2) Sending I420 data
#include <algorithm>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

void startPushI420Frames(bytertc::IRTCVideo* m_video) {
    std::thread([m_video]() {
        const int width = 320;
        const int height = 240;
        const int frameIntervalMs = 40; // ~25 fps
        // I420 layout: full-size Y plane plus quarter-size U and V planes (4:2:0)
        const size_t ySize = static_cast<size_t>(width) * height;
        const size_t uvSize = ySize / 4;
        const size_t totalSize = ySize + uvSize * 2;
        // Pre-allocated ring buffer (a vector is used here for illustration)
        std::vector<uint8_t> yuvBuffer(totalSize * 3); // room for 3 frames
        size_t frameIndex = 0;
        for (int i = 0; i < 2'500'000; ++i) {
            // Locate the current frame's slot in the ring buffer
            auto buffer = yuvBuffer.data() + frameIndex * totalSize;
            // Generate YUV data (here: a solid gray frame)
            uint8_t yValue = 0xBF;  // constant luma (light gray)
            uint8_t uvValue = 0x80; // 0x80 is neutral chroma, i.e. no color cast
            // Fill the Y (luma) plane
            std::fill_n(buffer, ySize, yValue);
            // Fill the U and V (chroma) planes
            std::fill_n(buffer + ySize, uvSize, uvValue);          // U plane
            std::fill_n(buffer + ySize + uvSize, uvSize, uvValue); // V plane
            // Build the VideoFrameBuilder
            bytertc::VideoFrameBuilder builder;
            builder.width = width;
            builder.height = height;
            // I420 plane pointers and strides
            builder.data[0] = buffer;                  // Y plane
            builder.data[1] = buffer + ySize;          // U plane
            builder.data[2] = buffer + ySize + uvSize; // V plane
            builder.linesize[0] = width;               // Y stride
            builder.linesize[1] = width / 2;           // U stride (half width)
            builder.linesize[2] = width / 2;           // V stride (half width)
            // Metadata
            builder.memory_deleter = nullptr; // memory is owned by the vector
            builder.pixel_fmt = bytertc::kVideoPixelFormatI420;
            builder.rotation = bytertc::kVideoRotation0;
            builder.timestamp_us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now().time_since_epoch()
            ).count();
            builder.color_space = bytertc::kColorSpaceYCbCrBT601LimitedRange;
            // Build and push the frame
            bytertc::IVideoFrame* pFrame = bytertc::buildVideoFrame(builder);
            if (!pFrame) {
                std::cerr << "Failed to build I420 video frame" << std::endl;
                continue;
            }
            int ret = m_video->pushExternalVideoFrame(pFrame);
            if (ret != 0) {
                std::cerr << "pushExternalVideoFrame failed: " << ret << std::endl;
            }
            // Advance the ring-buffer index
            frameIndex = (frameIndex + 1) % 3;
            // Pace the frame rate
            std::this_thread::sleep_for(std::chrono::milliseconds(frameIntervalMs));
        }
    }).detach();
}
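The plane sizes and offsets used above follow directly from 4:2:0 subsampling: the U and V planes each cover a quarter of the Y plane's area. A small helper (plain C++, independent of the SDK) makes the layout arithmetic explicit, assuming even width and height:

```cpp
#include <cstddef>

struct I420Layout {
    size_t ySize, uvSize, totalSize;
    size_t yOffset, uOffset, vOffset;
    int yStride, uvStride;
};

// Computes plane sizes, offsets, and strides for a tightly packed
// I420 frame (width and height are assumed to be even).
I420Layout makeI420Layout(int width, int height) {
    I420Layout l;
    l.ySize = static_cast<size_t>(width) * height;
    l.uvSize = l.ySize / 4;               // 4:2:0 -> quarter-size chroma
    l.totalSize = l.ySize + 2 * l.uvSize; // Y + U + V
    l.yOffset = 0;
    l.uOffset = l.ySize;
    l.vOffset = l.ySize + l.uvSize;
    l.yStride = width;
    l.uvStride = width / 2;
    return l;
}
```

For 320 × 240 this gives a 76,800-byte Y plane, two 19,200-byte chroma planes, and 115,200 bytes per frame, matching the values computed inline in the function above.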
startPushI420Frames(m_video);
3. Encoding parameters
Without calling this method, the SDK publishes by default only one video stream at a resolution of 640px × 360px and a frame rate of 15fps.
/**
 * @type api
 * @region Video Management
 * @brief <span id="IRTCVideo-setvideoencoderconfig-1"></span> The video publisher calls this API to set the parameters of the maximum-resolution video stream it expects to publish, including resolution, frame rate, bitrate, and the fallback strategy under poor network conditions. <br>
 *        This API configures a single stream. To configure multiple streams, call the overload: [setVideoEncoderConfig](#IRTCVideo-setvideoencoderconfig-2).
 * @param max_solution The parameters of the maximum-resolution video stream to publish. See VideoEncoderConfig{@link #VideoEncoderConfig}.
 * @return API call result: <br>
 *         + 0: Success <br>
 *         + !0: Failure
 * @note
 *       + You can use enableSimulcastMode{@link #IRTCVideo#enableSimulcastMode} together with this API to publish multiple streams with different resolutions. Specifically, to publish multiple streams of different resolutions, call this method and enable simulcast mode with enableSimulcastMode{@link #IRTCVideo#enableSimulcastMode} before publishing. The SDK will intelligently adjust the number of published streams (up to 4) and their parameters based on the subscribers' settings; the resolution set by this method is the maximum resolution among those streams. For the specific rules, see [Simulcasting](https://docs.byteplus.com/en/byteplus-rtc/docs/70139). <br>
 *       + Without calling this API, the SDK publishes only one stream at a resolution of 640px × 360px and a frame rate of 15fps. <br>
 *       + When using custom capture, you must call this API to set the encoding parameters to ensure the integrity of the picture received by remote users. <br>
 *       + This API applies to video streams captured by the camera; to set parameters for a screen-sharing video stream, see setScreenVideoEncoderConfig{@link #IRTCVideo#setScreenVideoEncoderConfig} (not applicable on Linux).
 */
virtual int setVideoEncoderConfig(const VideoEncoderConfig& max_solution) = 0;
// Match the encoder config to the frames you push:
bytertc::VideoEncoderConfig encodeConfig;
encodeConfig.width = width;   // e.g. 320, same as the pushed frames
encodeConfig.height = height; // e.g. 240
encodeConfig.frame_rate = 25; // matches the 40 ms push interval
encodeConfig.encoder_preference = bytertc::kVideoEncodePreferenceBalance;
m_video->setVideoEncoderConfig(encodeConfig);
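Per the note in the doc comment above, publishing multiple resolutions requires enabling simulcast before publishing. The following is a hedged sketch only: enableSimulcastMode is referenced by the official docs, but its exact signature and the field names should be verified against your SDK headers.

```cpp
// Sketch: enable simulcast, then declare the maximum-resolution stream.
// The SDK derives the lower-resolution streams from this configuration.
m_video->enableSimulcastMode(true); // verify signature in your SDK header

bytertc::VideoEncoderConfig maxConfig;
maxConfig.width = 1280;  // maximum resolution among the simulcast streams
maxConfig.height = 720;
maxConfig.frame_rate = 25;
maxConfig.encoder_preference = bytertc::kVideoEncodePreferenceBalance;
m_video->setVideoEncoderConfig(maxConfig);
```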