Mp4ToHLS
Workflow
- Use the native-media library to convert the specified MP4 file into HLS segments, each segmentDuration seconds long, with the starting segment number derived from sceneIndex.
- The generated TS segment files are stored in the same directory as playlistUrl (named according to tsPattern, for example "segment_%03d.ts").
- Generate an m3u8 fragment containing the #EXTINF entry for each TS segment.
- Append the generated m3u8 fragment to the playlist file specified by playlistUrl, so the playlist is updated dynamically (see the illustrative fragment below).
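For reference, after one scene has been appended, the playlist might grow by a fragment like the following (illustrative values only; the actual durations, file names, and number of entries depend on segmentDuration, sceneIndex, and tsPattern):

#EXTINF:10.000000,
segment_001.ts
#EXTINF:10.000000,
segment_002.ts
#EXTINF:4.266667,
segment_003.ts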
Java
package com.litongjava.media;
import java.io.File;
import com.litongjava.media.utils.LibraryUtils;
public class NativeMedia {
static {
LibraryUtils.load();
}
/**
* Initializes and loads the library
*/
public static void init() {
}
/**
* Splits an MP3 file into smaller MP3 files of specified size.
*
* @param srcPath Path to the source MP3 file
* @param size Maximum size in bytes for each output file
* @return Array of paths to the generated MP3 files
*/
public static native String[] splitMp3(String srcPath, long size);
/**
* Converts an MP4 video file to an MP3 audio file.
*
* @param inputPath Path to the input MP4 file
* @return Path to the output MP3 file on success, or an error message on failure
*/
public static native String mp4ToMp3(String inputPath);
public static native String toMp3(String inputPath);
public static native String convertTo(String inputPath, String targetFormat);
public static native String[] split(String srcPath, long size);
public static native String[] supportFormats();
/**
* Converts an MP4 file into HLS segments, and appends the generated TS files and m3u8
* playlist segments to the specified playlist file.
* Note: The playlistUrl must include the full path, for example "./data/hls/{sessionId}/playlist.m3u8",
* and the TS segment files will be stored in that directory.
*
* @param playlistUrl Path to the playlist file
* @param inputMp4Path Path to the input MP4 file
* @param sceneIndex Current scene index (used to determine the starting segment number for conversion)
* @param segmentDuration Segment duration in seconds
*/
public static String appendMp4ToHLS(String playlistUrl, String inputMp4Path, int sceneIndex, int segmentDuration) {
// Extract the directory from the playlistUrl
File playlistFile = new File(playlistUrl);
String directory = playlistFile.getParent();
// Construct the naming pattern for TS segment files, for example: ./data/hls/{sessionId}/segment_%03d.ts
String tsPattern = directory + "/segment_%03d.ts";
// Call the native method to complete the conversion from MP4 to HLS and update the playlist
return splitMp4ToHLS(playlistUrl, inputMp4Path, tsPattern, sceneIndex, segmentDuration);
}
/**
* Native method to be implemented by C.
*
* The implementation should include:
* 1. Using the native‑media library to convert the MP4 file specified by inputMp4Path into HLS segments,
* with each segment lasting segmentDuration seconds, and the starting segment number determined by sceneIndex.
* 2. Naming the generated TS segment files according to tsPattern, and storing them in the same directory
* as the playlistUrl file.
* 3. Generating an m3u8 segment based on the converted TS segment information (including the #EXTINF tag
* and the TS file name for each segment).
* 4. Appending the generated m3u8 segment content to the playlist file specified by playlistUrl (ensure that
* the playlist does not contain the #EXT-X-ENDLIST tag, otherwise the append will be ineffective).
*
* @param playlistUrl Full path to the playlist file
* @param inputMp4Path Path to the input MP4 file
* @param tsPattern Naming template for TS segment files (including the directory path)
* @param sceneIndex Current scene index (starting segment number for conversion)
* @param segmentDuration Segment duration in seconds
*/
public static native String splitMp4ToHLS(String playlistUrl, String inputMp4Path, String tsPattern, int sceneIndex, int segmentDuration);
}
Notes

Path handling: new File(playlistUrl).getParent() returns the directory containing the playlist file, which guarantees that the generated TS files end up in the same directory.

TS file naming template: TS files are named with a pattern such as "segment_%03d.ts"; the C implementer can adjust how segment files are numbered and generated as needed.
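A small sketch of that path derivation (the session directory shown is only an example):

String playlistUrl = "./data/hls/123456/playlist.m3u8"; // example playlist path
String directory = new File(playlistUrl).getParent();   // -> ./data/hls/123456
String tsPattern = directory + "/segment_%03d.ts";      // -> ./data/hls/123456/segment_%03d.ts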
Native method: splitMp4ToHLS is a native method implemented by the C developer. The implementation must cover the whole flow: converting the MP4 to HLS, generating the TS files, producing the m3u8 fragment, and appending that fragment to the existing playlist file.
Usage
NativeMedia.appendMp4ToHLS(playlistUrl, relPath, scene_index, 10);
This single call converts a new MP4 file and appends the resulting TS segments to the playlist.
package com.litongjava.linux.service;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import com.jfinal.kit.Kv;
import com.litongjava.linux.vo.HlsSession;
import com.litongjava.media.NativeMedia;
import com.litongjava.model.body.RespBodyVo;
import com.litongjava.tio.http.common.UploadFile;
import com.litongjava.tio.utils.SystemTimer;
import com.litongjava.tio.utils.hutool.FileUtil;
import com.litongjava.tio.utils.snowflake.SnowflakeIdUtils;
import lombok.extern.slf4j.Slf4j;
/**
 * HLS business service
*/
@Slf4j
public class HlsService {
  // Session state is kept in memory; persistent storage is recommended in real deployments
private static ConcurrentHashMap<Long, HlsSession> sessionMap = new ConcurrentHashMap<>();
/**
   * Initializes a session: creates the playlist file and registers the session record
   *
   * @param sessionId session ID supplied by the client; generated automatically if null
   * @param timestamp timestamp (reserved)
   * @return response body containing stream_url
*/
public RespBodyVo startSession(Long sessionId, Long timestamp) {
if (sessionId == null) {
sessionId = SnowflakeIdUtils.id();
}
if (timestamp == null) {
timestamp = SystemTimer.currTime;
}
String playFilePath = createPlaylistFile(sessionId);
HlsSession session = new HlsSession(sessionId, playFilePath);
sessionMap.put(sessionId, session);
Kv kv = Kv.by("stream_url", playFilePath);
return RespBodyVo.ok(kv);
}
/**
   * Creates the directory for the given sessionId and writes the initial playlist.m3u8 file
   *
   * @param sessionId session ID
   * @return path to the created playlist file
*/
public static String createPlaylistFile(Long sessionId) {
    // Build the directory and file paths, e.g. ./data/hls/{sessionId}/playlist.m3u8
String dirPath = "./data/hls/" + sessionId;
File dir = new File(dirPath);
if (!dir.exists()) {
      // Create the directory and any missing parent directories
dir.mkdirs();
}
String filePath = dirPath + "/playlist.m3u8";
File file = new File(filePath);
    // Initial playlist content with the mandatory HLS header tags
String content = "#EXTM3U\n" + "#EXT-X-VERSION:3\n" + "#EXT-X-TARGETDURATION:10\n" + "#EXT-X-MEDIA-SEQUENCE:0\n";
    // Write the file
try (FileWriter writer = new FileWriter(file)) {
writer.write(content);
writer.flush();
} catch (IOException e) {
log.error("Failed to create playlist file: " + e.getMessage());
e.printStackTrace();
}
return filePath;
}
/**
   * Uploads a scene file, writes it to the session directory, and appends it to the HLS stream
   *
   * @param sessionId   session ID
   * @param scene_index scene index
   * @param uploadFile  uploaded MP4 file data
   * @return response containing session_id and scene_index
*/
public RespBodyVo uploadScene(Long sessionId, Integer scene_index, UploadFile uploadFile) {
if (sessionId == null || !sessionMap.containsKey(sessionId)) {
return RespBodyVo.fail("Invalid sessionId");
}
String relPath = "./data/hls/" + sessionId + "/scene_" + scene_index + ".mp4";
File file = new File(relPath);
file.getParentFile().mkdirs();
FileUtil.writeBytes(uploadFile.getData(), file);
HlsSession hlsSession = sessionMap.get(sessionId);
String playlistUrl = hlsSession.getPlayFilePath();
String appendMp4ToHLS = NativeMedia.appendMp4ToHLS(playlistUrl, relPath, scene_index, 10);
log.info(appendMp4ToHLS);
Kv kv = Kv.by("session_id", sessionId).set("sceneI_index", scene_index);
return RespBodyVo.ok(kv);
}
/**
   * Returns the playlist URL
   *
   * @param sessionId session ID
   * @return response body containing stream_url
*/
public RespBodyVo getStreamUrl(Long sessionId) {
if (sessionId == null || !sessionMap.containsKey(sessionId)) {
return RespBodyVo.fail("Session not found");
}
HlsSession session = sessionMap.get(sessionId);
Kv kv = Kv.by("stream_url", session.getPlayFilePath());
return RespBodyVo.ok(kv);
}
/**
   * Returns the current processing status (for now simply the playlist URL; more status fields can be added)
   *
   * @param sessionId session ID
   * @return status response body
*/
public RespBodyVo getStatus(Long sessionId) {
if (sessionId == null || !sessionMap.containsKey(sessionId)) {
return RespBodyVo.fail("Session not found");
}
HlsSession session = sessionMap.get(sessionId);
Kv kv = Kv.by("stream_url", session.getPlayFilePath());
return RespBodyVo.ok(kv);
}
/**
   * Finishes the session and appends the EXT-X-ENDLIST tag to the playlist
   *
   * @param sessionId  session ID
   * @param finishTime finish timestamp (reserved)
   * @return response body
*/
public RespBodyVo finishSession(Long sessionId, Long finishTime) {
if (sessionId == null || !sessionMap.containsKey(sessionId)) {
return RespBodyVo.fail("Session not found");
}
HlsSession session = sessionMap.get(sessionId);
session.setFinished(true);
String playlistUrl = session.getPlayFilePath();
    // Append #EXT-X-ENDLIST to the playlist file
try (BufferedWriter writer = new BufferedWriter(new FileWriter(playlistUrl, true))) {
writer.write("#EXT-X-ENDLIST");
writer.newLine();
} catch (IOException e) {
e.printStackTrace();
return RespBodyVo.fail("Failed to write #EXT-X-ENDLIST to file: " + e.getMessage());
}
return RespBodyVo.ok();
}
}
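The service references com.litongjava.linux.vo.HlsSession, which is not listed in this article. Below is a minimal sketch of what that value object needs to hold, inferred from how it is used above; field names and accessors beyond getPlayFilePath/setFinished are assumptions.

package com.litongjava.linux.vo;

public class HlsSession {
  private Long sessionId;        // session identifier
  private String playFilePath;   // full path to playlist.m3u8 (assumed field name)
  private boolean finished;      // set to true by finishSession

  public HlsSession(Long sessionId, String playFilePath) {
    this.sessionId = sessionId;
    this.playFilePath = playFilePath;
  }

  public Long getSessionId() {
    return sessionId;
  }

  public String getPlayFilePath() {
    return playFilePath;
  }

  public boolean isFinished() {
    return finished;
  }

  public void setFinished(boolean finished) {
    this.finished = finished;
  }
}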
package com.litongjava.linux.service;
import java.io.File;
import org.junit.Test;
import com.litongjava.jfinal.aop.Aop;
import com.litongjava.tio.http.common.UploadFile;
import com.litongjava.tio.utils.hutool.FileUtil;
import com.litongjava.tio.utils.snowflake.SnowflakeIdUtils;
public class HlsServiceTest {
@Test
public void test() {
HlsService hlsService = Aop.get(HlsService.class);
long sessionId = SnowflakeIdUtils.id();
hlsService.startSession(sessionId, System.currentTimeMillis());
String filePath = "G:\\video\\CombinedScene.mp4";
File file = new File(filePath);
byte[] bytes = FileUtil.readBytes(file);
UploadFile uploadFile = new UploadFile(file.getName(), bytes);
hlsService.uploadScene(sessionId, 1, uploadFile);
hlsService.finishSession(sessionId, System.currentTimeMillis());
}
}
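Running a test like the one above leaves the session directory looking roughly like this (illustrative layout; the session ID is generated at runtime and the number of TS segments depends on the video length):

./data/hls/{sessionId}/
  playlist.m3u8     created by createPlaylistFile, then appended to by splitMp4ToHLS
  scene_1.mp4       written by uploadScene
  segment_001.ts    TS segments, numbered from start_number = scene_index = 1
  segment_002.ts
  ...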
native_mp4_to_hls.c
#include "com_litongjava_media_NativeMedia.h"
#include <jni.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#ifdef _WIN32
#include <windows.h>
#endif
// FFmpeg header files (ensure the FFmpeg development environment is properly configured)
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
/**
* Helper function: Convert FFmpeg error code to a string
*/
static void print_error(const char *msg, int errnum) {
char errbuf[128] = {0};
av_strerror(errnum, errbuf, sizeof(errbuf));
fprintf(stderr, "%s: %s\n", msg, errbuf);
}
/**
* JNI method implementation:
* Java declaration:
* public static native String splitMp4ToHLS(String playlistUrl, String inputMp4Path, String tsPattern, int sceneIndex, int segmentDuration);
*
* Real implementation steps:
* 1. Open the input MP4 file and retrieve stream information.
* 2. Allocate an output AVFormatContext using the "hls" muxer, with the output URL specified as playlistUrl.
* 3. Set HLS options:
* - hls_time: segment duration
* - start_number: starting segment number (sceneIndex)
* - hls_segment_filename: TS segment file naming template (including directory)
* - hls_flags=append_list: append if the playlist already exists
* - hls_list_size=0: do not limit the playlist length
* - hls_playlist_type=event: avoid writing EXT‑X‑ENDLIST to allow future appending
* 4. Create an output stream for each input stream (using copy mode).
* 5. Open the output file (if required) and write the header.
* 6. Read input packets, adjust timestamps, and write to the output to complete the segmentation.
 * 7. Skip the trailer (so EXT-X-ENDLIST is not written) and release resources.
* 8. Return a feedback message.
*/
JNIEXPORT jstring JNICALL Java_com_litongjava_media_NativeMedia_splitMp4ToHLS(JNIEnv *env, jclass clazz,
jstring playlistUrlJ,
jstring inputMp4PathJ, jstring tsPatternJ,
jint sceneIndex, jint segmentDuration) {
int ret = 0;
AVFormatContext *ifmt_ctx = NULL;
AVFormatContext *ofmt_ctx = NULL;
AVDictionary *opts = NULL;
int *stream_mapping = NULL;
int stream_mapping_size = 0;
AVPacket pkt;
// Retrieve JNI string parameters (all in UTF-8 encoding)
const char *playlistUrl = (*env)->GetStringUTFChars(env, playlistUrlJ, NULL);
const char *inputMp4Path = (*env)->GetStringUTFChars(env, inputMp4PathJ, NULL);
const char *tsPattern = (*env)->GetStringUTFChars(env, tsPatternJ, NULL);
// Check if the playlist file already contains EXT-X-ENDLIST (appending is not allowed)
{
FILE *checkFile = NULL;
#ifdef _WIN32
int size_needed = MultiByteToWideChar(CP_UTF8, 0, playlistUrl, -1, NULL, 0);
wchar_t *wPlaylistUrl = (wchar_t *) malloc(size_needed * sizeof(wchar_t));
if (wPlaylistUrl) {
MultiByteToWideChar(CP_UTF8, 0, playlistUrl, -1, wPlaylistUrl, size_needed);
checkFile = _wfopen(wPlaylistUrl, L"r");
free(wPlaylistUrl);
}
#else
checkFile = fopen(playlistUrl, "r");
#endif
if (checkFile) {
fseek(checkFile, 0, SEEK_END);
long size = ftell(checkFile);
fseek(checkFile, 0, SEEK_SET);
char *content = (char *) malloc(size + 1);
if (content) {
fread(content, 1, size, checkFile);
content[size] = '\0';
if (strstr(content, "#EXT-X-ENDLIST") != NULL) {
fclose(checkFile);
free(content);
(*env)->ReleaseStringUTFChars(env, playlistUrlJ, playlistUrl);
(*env)->ReleaseStringUTFChars(env, inputMp4PathJ, inputMp4Path);
(*env)->ReleaseStringUTFChars(env, tsPatternJ, tsPattern);
return (*env)->NewStringUTF(env, "Playlist already contains #EXT-X-ENDLIST, cannot append");
}
free(content);
}
fclose(checkFile);
}
}
// Initialize FFmpeg (newer versions do not require av_register_all)
// Open the input MP4 file
if ((ret = avformat_open_input(&ifmt_ctx, inputMp4Path, NULL, NULL)) < 0) {
print_error("Unable to open input file", ret);
goto end;
}
if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
print_error("Unable to retrieve stream info", ret);
goto end;
}
// Allocate output context using the hls muxer, with output URL specified as playlistUrl
ret = avformat_alloc_output_context2(&ofmt_ctx, NULL, "hls", playlistUrl);
if (ret < 0 || !ofmt_ctx) {
print_error("Unable to allocate output context", ret);
goto end;
}
// Set HLS options (to be passed to avformat_write_header)
{
char seg_time_str[16] = {0};
snprintf(seg_time_str, sizeof(seg_time_str), "%d", segmentDuration);
av_dict_set(&opts, "hls_time", seg_time_str, 0);
char start_num_str[16] = {0};
snprintf(start_num_str, sizeof(start_num_str), "%d", sceneIndex);
av_dict_set(&opts, "start_number", start_num_str, 0);
// Use the provided tsPattern as the TS segment file naming template
av_dict_set(&opts, "hls_segment_filename", tsPattern, 0);
// Set append mode, unlimited playlist, and event type (to prevent writing EXT-X-ENDLIST)
av_dict_set(&opts, "hls_flags", "append_list", 0);
av_dict_set(&opts, "hls_list_size", "0", 0);
av_dict_set(&opts, "hls_playlist_type", "event", 0);
}
// Create an output stream for each input stream (copy mode)
stream_mapping_size = ifmt_ctx->nb_streams;
stream_mapping = av_malloc_array(stream_mapping_size, sizeof(int));
if (!stream_mapping) {
ret = AVERROR(ENOMEM);
goto end;
}
for (int i = 0, j = 0; i < ifmt_ctx->nb_streams; i++) {
AVStream *in_stream = ifmt_ctx->streams[i];
AVCodecParameters *in_codecpar = in_stream->codecpar;
// Only copy video, audio, or subtitle streams (filter as needed)
if (in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
stream_mapping[i] = -1;
continue;
}
stream_mapping[i] = j++;
AVStream *out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream) {
ret = AVERROR_UNKNOWN;
goto end;
}
ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
if (ret < 0) {
print_error("Failed to copy codec parameters", ret);
goto end;
}
out_stream->codecpar->codec_tag = 0;
// Set the time base to that of the input stream (to avoid timestamp errors)
out_stream->time_base = in_stream->time_base;
}
// Open the output file (if required by the output format)
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
    // avio_open expects a UTF-8 path on every platform (FFmpeg converts it internally
    // on Windows), so the playlist path can be passed directly
    ret = avio_open(&ofmt_ctx->pb, playlistUrl, AVIO_FLAG_WRITE);
if (ret < 0) {
print_error("Unable to open output file", ret);
goto end;
}
}
// Write the header (the output muxer will generate TS files and update the playlist based on the provided options)
ret = avformat_write_header(ofmt_ctx, &opts);
if (ret < 0) {
print_error("Failed to write header", ret);
goto end;
}
// Start reading packets from the input file and writing them to the output muxer (the hls muxer automatically handles real-time segmentation)
while (av_read_frame(ifmt_ctx, &pkt) >= 0) {
if (pkt.stream_index >= ifmt_ctx->nb_streams ||
stream_mapping[pkt.stream_index] < 0) {
av_packet_unref(&pkt);
continue;
}
// Retrieve input and output streams
AVStream *in_stream = ifmt_ctx->streams[pkt.stream_index];
AVStream *out_stream = ofmt_ctx->streams[stream_mapping[pkt.stream_index]];
// Convert timestamps
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
pkt.stream_index = stream_mapping[pkt.stream_index];
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
print_error("Failed to write packet", ret);
break;
}
av_packet_unref(&pkt);
}
// Note: Not calling av_write_trailer will prevent the muxer from writing EXT-X-ENDLIST, thus allowing subsequent appending
// av_write_trailer(ofmt_ctx);
end:
if (stream_mapping)
av_freep(&stream_mapping);
if (ifmt_ctx)
avformat_close_input(&ifmt_ctx);
if (ofmt_ctx) {
if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
}
av_dict_free(&opts);
// Release JNI strings
(*env)->ReleaseStringUTFChars(env, playlistUrlJ, playlistUrl);
(*env)->ReleaseStringUTFChars(env, inputMp4PathJ, inputMp4Path);
(*env)->ReleaseStringUTFChars(env, tsPatternJ, tsPattern);
if (ret < 0) {
return (*env)->NewStringUTF(env, "HLS segmentation failed");
} else {
char resultMsg[256] = {0};
snprintf(resultMsg, sizeof(resultMsg), "HLS segmentation successful: generated new segments and appended to playlist %s", playlistUrl);
return (*env)->NewStringUTF(env, resultMsg);
}
}
Code Explanation

1. Header Includes and Platform Adaptation

JNI and standard library includes
The file includes the JNI header, the standard C library headers, and the FFmpeg headers needed to implement the JNI interface and handle media processing.

#include "com_litongjava_media_NativeMedia.h"
#include <jni.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

Platform adaptation
On Windows, <windows.h> is included for the wide-character conversion functions used later when reading the playlist file.

#ifdef _WIN32
#include <windows.h>
#endif

FFmpeg headers
The FFmpeg headers cover container (de)muxing, general utility functions, and option handling.

#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/opt.h>
2. Helper Function: print_error

Purpose
The function uses av_strerror to turn an FFmpeg error code into a readable string and writes the message to standard error with fprintf.

Implementation
It defines a fixed-size buffer errbuf, calls av_strerror to fill in the error description, and then prints the message.

static void print_error(const char *msg, int errnum) {
  char errbuf[128] = {0};
  av_strerror(errnum, errbuf, sizeof(errbuf));
  fprintf(stderr, "%s: %s\n", msg, errbuf);
}
3. Overall Flow of the JNI Method Java_com_litongjava_media_NativeMedia_splitMp4ToHLS

Declaration and parameters
This JNI method backs the Java-level declaration and splits an MP4 file into HLS segments.
- playlistUrlJ: path (or URL) of the playlist file.
- inputMp4PathJ: path of the input MP4 file.
- tsPatternJ: naming template for the TS segment files (including the directory).
- sceneIndex: starting segment number.
- segmentDuration: duration of each TS segment in seconds.
Implementation overview
- Check whether the playlist already contains the #EXT-X-ENDLIST tag; if so, refuse to append.
- Open the input file with FFmpeg and retrieve the stream information.
- Allocate an output context that uses the HLS muxer and configure the corresponding HLS options.
- Create an output stream for every relevant input stream and copy the codec parameters.
- Open the output file and write the muxer header.
- Read the input packets one by one, rescale their timestamps, and write them to the output muxer so the segmentation happens on the fly.
- Finally, release all resources and return the result.
4. Retrieving the JNI String Parameters

GetStringUTFChars converts the Java strings into UTF-8 encoded C strings for the rest of the function to use.

const char *playlistUrl = (*env)->GetStringUTFChars(env, playlistUrlJ, NULL);
const char *inputMp4Path = (*env)->GetStringUTFChars(env, inputMp4PathJ, NULL);
const char *tsPattern = (*env)->GetStringUTFChars(env, tsPatternJ, NULL);
5. Checking the Playlist File

Purpose
Avoid appending to a playlist that already contains the #EXT-X-ENDLIST tag.

Implementation
- Open the file in the platform-appropriate way (wide-character path on Windows, plain fopen elsewhere).
- Read the whole file and search for the #EXT-X-ENDLIST string.
- If the tag is found, release the resources acquired so far and return an error message, aborting the operation.

// Check whether the playlist file already contains EXT-X-ENDLIST (appending is not allowed)
{
  FILE *checkFile = NULL;
  // Windows-specific handling
#ifdef _WIN32
  // open the file via a wide-character path
  ...
#else
  checkFile = fopen(playlistUrl, "r");
#endif
  if (checkFile) {
    ...
    if (strstr(content, "#EXT-X-ENDLIST") != NULL) {
      ...
      return (*env)->NewStringUTF(env, "Playlist already contains #EXT-X-ENDLIST, cannot append");
    }
    ...
  }
}
6. Opening the Input File and Retrieving Stream Information

Opening the file
avformat_open_input opens the specified MP4 file.

if ((ret = avformat_open_input(&ifmt_ctx, inputMp4Path, NULL, NULL)) < 0) {
  print_error("Unable to open input file", ret);
  goto end;
}

Retrieving stream information
avformat_find_stream_info analyzes the file and collects the stream information needed later when creating the output streams.

if ((ret = avformat_find_stream_info(ifmt_ctx, NULL)) < 0) {
  print_error("Unable to retrieve stream info", ret);
  goto end;
}
7. Allocating the Output Context and Setting HLS Options

Allocating the output context
avformat_alloc_output_context2 creates a new output context that uses the hls muxer, with the playlist file path as the output URL.

ret = avformat_alloc_output_context2(&ofmt_ctx, NULL, "hls", playlistUrl);
if (ret < 0 || !ofmt_ctx) {
  print_error("Unable to allocate output context", ret);
  goto end;
}

Setting the HLS options
av_dict_set passes the relevant options to the muxer:
- hls_time: duration of each TS segment.
- start_number: starting number for the segment files.
- hls_segment_filename: naming template for the TS files.
- hls_flags, hls_list_size, hls_playlist_type: configure append mode, an unlimited playlist, and the event playlist type (to keep the end tag from being written).

{
  char seg_time_str[16] = {0};
  snprintf(seg_time_str, sizeof(seg_time_str), "%d", segmentDuration);
  av_dict_set(&opts, "hls_time", seg_time_str, 0);
  char start_num_str[16] = {0};
  snprintf(start_num_str, sizeof(start_num_str), "%d", sceneIndex);
  av_dict_set(&opts, "start_number", start_num_str, 0);
  av_dict_set(&opts, "hls_segment_filename", tsPattern, 0);
  av_dict_set(&opts, "hls_flags", "append_list", 0);
  av_dict_set(&opts, "hls_list_size", "0", 0);
  av_dict_set(&opts, "hls_playlist_type", "event", 0);
}
8. Creating Output Streams and Copying Codec Parameters

Stream mapping
A mapping array stream_mapping is built for the input streams, filtering out any stream that is not video, audio, or subtitle.

Creating output streams
For each remaining input stream:
- avformat_new_stream creates a new stream in the output context.
- avcodec_parameters_copy copies the input stream's codec parameters to the output stream.
- The output stream's time base is set to that of the input stream so timestamps convert correctly.

for (int i = 0, j = 0; i < ifmt_ctx->nb_streams; i++) {
  AVStream *in_stream = ifmt_ctx->streams[i];
  AVCodecParameters *in_codecpar = in_stream->codecpar;
  if (in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
      in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
      in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
    stream_mapping[i] = -1;
    continue;
  }
  stream_mapping[i] = j++;
  AVStream *out_stream = avformat_new_stream(ofmt_ctx, NULL);
  if (!out_stream) {
    ret = AVERROR_UNKNOWN;
    goto end;
  }
  ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
  if (ret < 0) {
    print_error("Failed to copy codec parameters", ret);
    goto end;
  }
  out_stream->codecpar->codec_tag = 0;
  out_stream->time_base = in_stream->time_base;
}
9. Opening the Output File and Writing the Header

Opening the output file
The code first checks whether the output format requires an explicitly opened file (muxers that set AVFMT_NOFILE, such as hls, open their own files, in which case this branch is skipped) and, if it does, calls avio_open on the playlist path. FFmpeg expects UTF-8 file names on every platform, so the UTF-8 string is passed directly.

if (!(ofmt_ctx->oformat->flags & AVFMT_NOFILE)) {
  ret = avio_open(&ofmt_ctx->pb, playlistUrl, AVIO_FLAG_WRITE);
  if (ret < 0) {
    print_error("Unable to open output file", ret);
    goto end;
  }
}

Writing the header
avformat_write_header writes the header; from this point the muxer generates TS segment files and updates the playlist according to the options set earlier.

ret = avformat_write_header(ofmt_ctx, &opts);
if (ret < 0) {
  print_error("Failed to write header", ret);
  goto end;
}
10. Reading Packets and Writing the Output

Reading and filtering packets
av_read_frame is called in a loop to read packets from the input file; packets whose streams were excluded from the stream mapping are skipped.

Timestamp conversion
To avoid timestamp errors, av_rescale_q_rnd converts each packet's pts and dts (and av_rescale_q its duration) from the input stream's time base to the output stream's time base.

pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                           AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                           AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);

Writing packets
av_interleaved_write_frame writes the converted packet to the output context. On a write error the message is printed and the loop is aborted.

ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0) {
  print_error("Failed to write packet", ret);
  break;
}
11. Trailer Handling and Resource Cleanup

Trailer
As the comment notes, av_write_trailer is deliberately not called so the muxer never writes EXT-X-ENDLIST, which keeps the playlist open for appending further TS segments later.

// Note: Not calling av_write_trailer will prevent the muxer from writing EXT-X-ENDLIST, thus allowing subsequent appending
// av_write_trailer(ofmt_ctx);

Resource cleanup
The corresponding release functions free:
- the stream mapping array (av_freep)
- the input context (avformat_close_input)
- the output context and its associated I/O context (avio_closep and avformat_free_context)
- the option dictionary (av_dict_free)
- the JNI strings (ReleaseStringUTFChars)
This ensures there are no memory leaks.
12. Returning the Result

Error handling
If an error occurred at any point, the string "HLS segmentation failed" is returned.

Success
Otherwise a success message is returned that includes the playlist path.

if (ret < 0) {
  return (*env)->NewStringUTF(env, "HLS segmentation failed");
} else {
  char resultMsg[256] = {0};
  snprintf(resultMsg, sizeof(resultMsg), "HLS segmentation successful: generated new segments and appended to playlist %s", playlistUrl);
  return (*env)->NewStringUTF(env, resultMsg);
}
Summary
This code implements a JNI interface that uses FFmpeg to convert an MP4 file to HLS, splitting it into multiple TS files and updating the playlist dynamically. The core flow is:
- check the playlist state;
- open and parse the input file;
- create output streams from the input streams and copy their parameters;
- set the HLS options (segment duration, starting number, naming template, and so on);
- read, rescale, and write packets on the fly;
- deliberately skip the trailer so further segments can be appended later;
- finally, return the result.
This design suits scenarios that need dynamically generated HLS streams with support for resuming or appending segments in real time.