XWikiTube
- Feature
- Completed
Description
The XWikiTube application has been implemented within the UCF project; see the XWikiTube Application, which only requires FFmpeg to be installed on the server.
The XWikiTube application will allow users to manage their videos on an XWiki platform. XWikiTube will offer the following features:
Video upload
The user can upload videos directly in the wiki. The upload can be done through a customized input (macro), from the XWikiTube App Within Minutes (AWM) main page, or directly from the WYSIWYG editor, as is done with images.
The upload process will take XWiki's limitations on attachment size into account by saving the videos on the file system.
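As a minimal sketch of this idea, the helper below streams an upload straight to a directory on the server's file system instead of storing it as a wiki attachment. The class name and the VIDEO_STORE_DIR location are hypothetical, not part of the actual implementation.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class VideoStore {

    // Hypothetical base directory on the server's file system;
    // storing here sidesteps the size limits of XWiki attachments.
    private static final Path VIDEO_STORE_DIR = Paths.get("/var/lib/xwiki/videos");

    /** Streams an uploaded video straight to disk and returns its path. */
    public static Path save(InputStream upload, String fileName) throws IOException {
        Files.createDirectories(VIDEO_STORE_DIR);
        Path target = VIDEO_STORE_DIR.resolve(fileName);
        Files.copy(upload, target, StandardCopyOption.REPLACE_EXISTING);
        return target;
    }
}
```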
Upload using the Video Upload macro
The macro displays the input box for uploading the video.
The macro can be executed with some parameters (see the sketch after this list):
- videoFormats: a string parameter containing a comma-separated list of allowed video formats.
- startTranscodingAfterUpload: a boolean parameter indicating whether transcoding starts right after the upload finishes or is scheduled later from the UI.
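A plain-Java sketch of how these two parameters might be modeled; the bean name, default values, and helper method are hypothetical, not the actual macro implementation.

```java
import java.util.Arrays;
import java.util.List;

/** Bean holding the Video Upload macro parameters described above. */
public class VideoUploadMacroParameters {

    // Comma-separated list of allowed video formats (assumed defaults).
    private String videoFormats = "mp4,webm";

    // Whether transcoding starts right after the upload completes,
    // or is scheduled later from the UI.
    private boolean startTranscodingAfterUpload = true;

    /** Splits the videoFormats parameter into individual format names. */
    public List<String> getAllowedFormats() {
        return Arrays.asList(videoFormats.split("\\s*,\\s*"));
    }

    public boolean isStartTranscodingAfterUpload() {
        return startTranscodingAfterUpload;
    }

    public void setVideoFormats(String videoFormats) {
        this.videoFormats = videoFormats;
    }

    public void setStartTranscodingAfterUpload(boolean value) {
        this.startTranscodingAfterUpload = value;
    }
}
```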
Upload from the XWikiTube AWM main page
The XWikiTube AWM is an application that lists and displays information about all the videos uploaded in the wiki.
The upload process is the same as with the Video Upload macro.
Upload from the WYSIWYG editor
The upload is done directly from the WYSIWYG editor, as is done with images.
Transcoding video
A transcoding process is applied after video upload. Video files are transcoded into the required formats and codecs to enable web publishing in adaptive qualities and on multiple devices (including different mobile and iOS devices), while supporting all popular media codecs.
Each source file can be transcoded into multiple flavors, i.e. files with different bitrates, dimensions, and quality. During playback the player selects the most appropriate flavor based on the viewer's available bandwidth, player dimensions, and CPU usage.
In the first version we will focus on the encodings related to network capabilities and resolutions.
Transcoding video workflow
The conversion of the video can start in two different ways:
1) Directly after the video upload
2) Scheduled from the UI
The transcoding can be done using specialized libraries and APIs; one of the most popular is FFmpeg.
The transcoding process is composed of two steps.
Step 1: Create the video and audio streams
For each video source we create a set of video and audio streams, each with varying resolutions and bit rates.
For example, let’s assume there is an initial video source named “input_video.y4m” and an initial audio source named “input_audio.wav”. We create five video streams and one audio stream, each with a different resolution and bit rate.
Video and audio Streams:
- video_160x90_250k.webm
- video_320x180_500k.webm
- video_640x360_750k.webm
- video_640x360_1000k.webm
- video_1280x720_500k.webm
- audio_128k.webm
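A sketch of how the server side might drive FFmpeg to produce the streams listed above, using the VP9/Vorbis flags documented in the WebM DASH encoding guide. It assumes ffmpeg is on the PATH and that the input files from the example exist in the working directory; the class name and error handling are illustrative.

```java
import java.io.IOException;
import java.util.Arrays;

public class StreamEncoder {

    /** The flavors from the example above: resolution and video bitrate. */
    private static final String[][] FLAVORS = {
        {"160x90",   "250k"},
        {"320x180",  "500k"},
        {"640x360",  "750k"},
        {"640x360",  "1000k"},
        {"1280x720", "500k"}
    };

    public static void encodeAll() throws IOException, InterruptedException {
        // One VP9 video stream per flavor, keyframe-aligned for DASH.
        for (String[] flavor : FLAVORS) {
            String out = String.format("video_%s_%s.webm", flavor[0], flavor[1]);
            run("ffmpeg", "-i", "input_video.y4m",
                "-c:v", "libvpx-vp9", "-s", flavor[0], "-b:v", flavor[1],
                "-keyint_min", "150", "-g", "150",
                "-an", "-f", "webm", "-dash", "1", out);
        }
        // One Vorbis audio stream.
        run("ffmpeg", "-i", "input_audio.wav",
            "-c:a", "libvorbis", "-b:a", "128k",
            "-vn", "-f", "webm", "-dash", "1", "audio_128k.webm");
    }

    private static void run(String... command) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("ffmpeg failed: " + Arrays.toString(command));
        }
    }
}
```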
Step 2: Create the DASH Manifest
The DASH manifest is an XML file (usually with the extension .mpd). It can be created with FFmpeg.
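Continuing the sketch above, the manifest can be produced with FFmpeg's webm_dash_manifest muxer, grouping the five video streams into one adaptation set and the audio stream into another; the wrapper class is again hypothetical.

```java
import java.io.IOException;

public class ManifestBuilder {

    /** Builds manifest.mpd from the streams created in step 1. */
    public static void createManifest() throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
            "ffmpeg",
            "-f", "webm_dash_manifest", "-i", "video_160x90_250k.webm",
            "-f", "webm_dash_manifest", "-i", "video_320x180_500k.webm",
            "-f", "webm_dash_manifest", "-i", "video_640x360_750k.webm",
            "-f", "webm_dash_manifest", "-i", "video_640x360_1000k.webm",
            "-f", "webm_dash_manifest", "-i", "video_1280x720_500k.webm",
            "-f", "webm_dash_manifest", "-i", "audio_128k.webm",
            "-c", "copy",
            "-map", "0", "-map", "1", "-map", "2",
            "-map", "3", "-map", "4", "-map", "5",
            "-f", "webm_dash_manifest",
            // One adaptation set for the five video streams, one for audio.
            "-adaptation_sets", "id=0,streams=0,1,2,3,4 id=1,streams=5",
            "manifest.mpd").inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("manifest creation failed");
        }
    }
}
```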
What is DASH?
Dynamic Adaptive Streaming over HTTP (DASH), also known as MPEG-DASH, is an adaptive bitrate streaming technique that enables high-quality streaming of media content over the Internet, delivered from conventional HTTP web servers.
DASH uses existing HTTP web server infrastructure that is used for delivery of essentially all World Wide Web content. It allows devices like Internet-connected televisions, TV set-top boxes, desktop computers, smartphones, tablets, etc. to consume multimedia content (video, TV, radio, etc.) delivered via the Internet, coping with variable Internet receiving conditions.
Adaptive bitrate streaming is a technique used in streaming multimedia over computer networks. While in the past most video streaming technologies utilized streaming protocols such as RTP with RTSP, today's adaptive streaming technologies are almost exclusively based on HTTP and designed to work efficiently over large distributed HTTP networks such as the Internet.
It works by detecting a user's bandwidth and CPU capacity in real time and adjusting the quality of a video stream accordingly. It requires the use of an encoder which can encode a single source video at multiple bit rates (the bit rate is the number of bits that are conveyed or processed per unit of time). The player client switches between streaming the different encodings depending on available resources. "The result: very little buffering, fast start time and a good experience for both high-end and low-end connections."
More specifically, and as the implementations in use today are, adaptive bitrate streaming is a method of video streaming over HTTP where the source content is encoded at multiple bit rates, then each of the different bit rate streams are segmented into small multi-second parts. The streaming client is made aware of the available streams at differing bit rates, and segments of the streams by a manifest file. When starting, the client requests the segments from the lowest bit rate stream. If the client finds the download speed is greater than the bit rate of the segment downloaded, then it will request the next higher bit rate segments. Later, if the client finds the download speed for a segment is lower than the bit rate for the segment, and therefore the network throughput has deteriorated, then it will request a lower bit rate segment. The segment size can vary depending on the particular implementation, but they are typically between two (2) and ten (10) seconds.
(Figures: adaptive streaming overview; adaptive streaming in action.)
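To make the switching logic above concrete, here is an illustrative sketch of the client-side decision. The bitrate ladder and the single-threshold heuristic are assumptions for illustration; real players such as dash.js use more elaborate adaptation algorithms.

```java
/** Illustrative sketch of the client-side adaptation loop described above. */
public class RateAdaptation {

    // Bitrates of the available streams, in kbit/s, sorted ascending (assumed ladder).
    private static final int[] BITRATES_KBPS = {250, 500, 750, 1000};

    private int current = 0; // start with the lowest bitrate stream

    /** Picks the stream for the next segment from the last measured throughput. */
    public int nextBitrate(int measuredThroughputKbps) {
        if (current + 1 < BITRATES_KBPS.length
                && measuredThroughputKbps > BITRATES_KBPS[current + 1]) {
            current++; // download speed exceeds the next bitrate: step up
        } else if (measuredThroughputKbps < BITRATES_KBPS[current] && current > 0) {
            current--; // throughput deteriorated below the current bitrate: step down
        }
        return BITRATES_KBPS[current];
    }
}
```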
Bit rate
In digital multimedia, bitrate represents the amount of information, or detail, that is stored per unit of time of a recording. The bitrate depends on several factors:
- The original material may be sampled at different frequencies.
- The samples may use different numbers of bits.
- The data may be encoded by different schemes.
- The information may be digitally compressed by different algorithms or to different degrees.
Generally, choices are made about the above factors in order to achieve the desired trade-off between minimizing the bitrate and maximizing the quality of the material when it is played.
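As a rough illustration of this trade-off, the arithmetic below compares the uncompressed bitrate of a 640x360, 30 fps RGB video with the 750 kbit/s flavor used earlier; the frame rate and color depth are assumptions chosen for the example.

```java
/** Rough arithmetic showing why compression dominates the final bitrate. */
public class BitrateExample {
    public static void main(String[] args) {
        int width = 640, height = 360;   // frame size in pixels
        int bitsPerPixel = 24;           // uncompressed RGB
        int fps = 30;                    // frames per second

        long rawKbps = (long) width * height * bitsPerPixel * fps / 1000;
        long targetKbps = 750;           // the 640x360_750k flavor above

        System.out.println("Uncompressed: " + rawKbps + " kbit/s");       // 165888
        System.out.println("Compression ratio: " + rawKbps / targetKbps); // ~221:1
    }
}
```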
As an example, here are some video bit rates:
- 16 kbit/s – videophone quality (minimum necessary for a consumer-acceptable "talking head" picture using various video compression schemes)
- 128–384 kbit/s – business-oriented videoconferencing quality using video compression
- 400 kbit/s – YouTube 240p videos (using H.264)
- 1 Mbit/s – YouTube 480p videos (using H.264)
- 2.5 Mbit/s – YouTube 720p videos (using H.264)
- 3.5 Mbit/s typ – Standard-definition television quality (with bit-rate reduction from MPEG-2 compression)
- 4.5 Mbit/s – YouTube 1080p videos (using H.264)
- 9.8 Mbit/s max – DVD (using MPEG2 compression)
- 8 to 15 Mbit/s typ – HDTV quality (with bit-rate reduction from MPEG-4 AVC compression)
- 19 Mbit/s approximate – HDV 720p (using MPEG2 compression)
- 24 Mbit/s max – AVCHD (using MPEG4 AVC compression)
- 25 Mbit/s approximate – HDV 1080i (using MPEG2 compression)
...
Benefits of adaptive bitrate streaming
Traditional server-driven adaptive bitrate streaming provides consumers of streaming media with the best possible experience, since the media server automatically adapts to any changes in each user's network and playback conditions. The media and entertainment industry also benefits from adaptive bitrate streaming. As the video space grows, content delivery networks and video providers can provide customers with a superior viewing experience. Adaptive bitrate technology requires additional encoding, but simplifies the overall workflow and creates better results.
HTTP-based adaptive bitrate streaming technologies yield additional benefits over traditional server-driven adaptive bitrate streaming. First, since the streaming technology is built on top of HTTP, contrary to RTP-based adaptive streaming, the packets have no difficulties traversing firewall and NAT devices. Second, since HTTP streaming is purely client-driven, all adaptation logic resides at the client. This reduces the need for persistent connections between server and client. Furthermore, the server is not required to maintain session state information on each client, increasing scalability. Finally, existing HTTP delivery infrastructure, such as HTTP caches and servers, can be seamlessly adopted.
A scalable CDN is used to deliver media streaming to an Internet audience. The CDN receives the stream from the source at its origin server, then replicates it to many or all of its edge cache servers. The end user requests the stream and is redirected to the "closest" edge server. This can be tested using libdash and the Distributed DASH (D-DASH) dataset, which has several mirrors across Europe, Asia and the US. The use of HTTP-based adaptive streaming allows the edge server to run simple HTTP server software, which is cheap or free to license, reducing software licensing costs compared to costly media server licenses (e.g. Adobe Flash Media Streaming Server). The CDN cost for HTTP streaming media is then similar to HTTP web caching CDN cost.
Use case: Stream and play back WebM files using DASH
The use case includes the following steps:
1) Creating WebM files for DASH
To create WebM files for DASH we use the ffmpeg library (http://ffmpeg.org/).
You will need to download the latest (tip-of-the-tree) version of FFmpeg in order for some DASH features to work. You can either download a nightly static build from https://www.ffmpeg.org/download.html or build FFmpeg yourself from the git repository.
Use FFmpeg to encode some of your own videos to WebM files.
WebM is an open media file format designed for the web. WebM files consist of video streams compressed with the VP8 or VP9 video codec and audio streams compressed with the Vorbis or Opus audio codec.
2) Create the WebM DASH Manifest
The WebM DASH manifest is an XML file (usually with the extension .mpd). It can be created using FFmpeg (see the sketch in the transcoding section above).
The manifest file is the file that the DASH JavaScript player uses to play back the video.
3) Stream and playback WebM files on Web using Dash.js
One option for streaming WebM files adaptively on the Web is Dash.js, an open-source media player built on HTML5.
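As a sketch of how a wiki macro might wire this up, the Java helper below emits an HTML5 video element initialized with the documented dash.js entry point (dashjs.MediaPlayer().create().initialize(...)). The class name and markup details are hypothetical.

```java
/** Sketch of the HTML a video player macro might render for dash.js playback. */
public class DashPlayerRenderer {

    /** Returns an HTML5 video element wired to dash.js for the given manifest URL. */
    public static String render(String manifestUrl) {
        return ""
            + "<video id=\"videoPlayer\" controls></video>\n"
            + "<script src=\"dash.all.min.js\"></script>\n"
            + "<script>\n"
            // initialize(view, source, autoPlay) is the documented dash.js
            // entry point; 'true' starts playback automatically.
            + "  dashjs.MediaPlayer().create().initialize(\n"
            + "    document.querySelector('#videoPlayer'), '" + manifestUrl + "', true);\n"
            + "</script>\n";
    }
}
```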
Video delivery and streaming
XWikiTube aims to ensure the best possible video delivery performance, even when working with high volumes of content and a large audience.
After a video is transcoded, we will need to stream the generated files adaptively. We will use the XWiki Video Macro, which integrates the Video.js library and supports DASH.
XWikiTube Main page
This page is the main page of the XWikiTube application; it will list all the uploaded videos.
For each source video all related information will be displayed:
- Video format
- Author (user who uploaded the video)
- Video size
- The status of the transcoding process
- Files generated after transcoding (Video streams, manifest file ...)
From this page we can also manage videos; there are several actions that the user can execute:
- Play the video: shows a preview of the video
- Encode the video: executes the transcoding process and generates the related files
- Delete the video: deletes the source video and all its related files
Video editing and manipulation (V2 features)
XWikiTube will provide core video manipulation capabilities, including thumbnail generation, image cropping and resizing, video trimming, video transitions, video overlays (annotations), video effects, audio and voice control, video speed control, and more.
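Of these, thumbnail generation maps directly onto a single FFmpeg invocation. A minimal sketch, assuming the same ProcessBuilder approach as the transcoding step; the class and method names are illustrative.

```java
import java.io.IOException;

/** Sketch of V2 thumbnail generation: grab a single frame with ffmpeg. */
public class ThumbnailGenerator {

    /** Extracts one frame at the given timestamp (e.g. "00:00:05") as an image. */
    public static void createThumbnail(String video, String timestamp, String output)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
            "ffmpeg", "-ss", timestamp, "-i", video,
            "-vframes", "1", output).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("thumbnail generation failed for " + video);
        }
    }
}
```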