Service Platform

Technologies supporting new media content lifecycles
The EXPERIMEDIA Service Platform consists of a set of media services instrumented for deep observability within experimentation and technology trials. Each service has a corresponding service model whose QoS metrics are reported and made available to the customer during experimentation. Such detailed metrics, which are typically not available from equivalent commercial services, are necessary for customers to explore the relationship between QoS and QoE. In addition, a semantic provenance model allows user-centric activities and interactions to be tracked and linked to detailed metrics. This capability lets customers track users in open studies and explore correlations between QoE, system interaction and system performance.
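The link between user interactions and service metrics can be pictured as a small data model. The sketch below is illustrative only: the class and field names (`QoSMetric`, `UserActivity`, `link`) are hypothetical and do not reflect the platform's actual provenance API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class QoSMetric:
    """A single service-level measurement (names are illustrative)."""
    service: str        # e.g. "streaming"
    name: str           # e.g. "startup_delay_ms"
    value: float
    timestamp: datetime

@dataclass
class UserActivity:
    """A user-centric interaction that provenance can link to metrics."""
    user_id: str
    action: str         # e.g. "pressed_play"
    timestamp: datetime
    linked_metrics: list = field(default_factory=list)

    def link(self, metric: QoSMetric) -> None:
        # Attach a QoS measurement observed around this interaction,
        # so QoE analysis can correlate the two later.
        self.linked_metrics.append(metric)

activity = UserActivity("user-42", "pressed_play",
                        datetime(2013, 5, 1, 12, 0, tzinfo=timezone.utc))
activity.link(QoSMetric("streaming", "startup_delay_ms", 850.0,
                        datetime(2013, 5, 1, 12, 0, 1, tzinfo=timezone.utc)))
```

With records of this shape, an experimenter can ask, for example, whether sessions with high startup delay coincide with users abandoning playback.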
Access to the service platform is provided through a self-service interface that allows customers to propose an experiment and select the services they require by venue. Each venue has a preconfigured service bundle that is a subset of the overall platform offering. When a customer selects a venue, a customisable list lets them choose the services they need.
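The venue-scoped selection can be imagined as a filter over the full service catalogue. Everything below (service identifiers, venue names, the `selectable_services` helper) is a hypothetical sketch of the idea, not the actual self-service interface.

```python
# Hypothetical sketch: each venue exposes a preconfigured bundle that
# is a subset of the full platform catalogue; the customer then picks
# from that bundle. All identifiers here are illustrative only.

PLATFORM_SERVICES = {"avcc", "pcc", "scc", "3dcc", "ecc"}

VENUE_BUNDLES = {
    "venue_a": {"avcc", "pcc", "scc"},
    "venue_b": {"pcc", "3dcc"},
}

def selectable_services(venue):
    """Return the customisable list of services offered at a venue."""
    # Intersect with the catalogue so a bundle can never offer a
    # service the platform does not provide.
    return VENUE_BUNDLES.get(venue, set()) & PLATFORM_SERVICES
```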
The platform is designed to support the lifecycle of different types of content. EXPERIMEDIA groups content into five areas as described below.

Experiment Content


Experiment content is produced and consumed by developers testing multimedia systems to gain insight into their structure, behaviour and performance. System configuration, system dependency graphs, input/output data sets, testing procedures and monitoring data all characterise experiment content.
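Those characteristics can be gathered into a simple experiment descriptor. The dictionary below is a hypothetical illustration of the kinds of fields involved, not a platform schema; all names and values are invented for the example.

```python
# Hypothetical experiment descriptor illustrating the elements that
# characterise experiment content; all fields are illustrative only.
experiment = {
    "system_configuration": {"cdn_nodes": 3, "codec": "h264"},
    "dependency_graph": {"player": ["cdn"], "cdn": ["origin"]},
    "input_data_sets": ["example-trace.csv"],
    "testing_procedure": ["start services", "run load test", "collect metrics"],
    "monitoring_data": {"startup_delay_ms": [820, 790, 845]},
}

def mean(values):
    """Average a monitored metric series."""
    return sum(values) / len(values)
```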

Audio Visual Content


Audio visual content is primarily characterised by video and metadata streamed to and consumed by applications (e.g. players). AV content is produced by professionals and users through content production, management and distribution networks. The Audio Visual Content Component (AVCC) offers capabilities for all aspects of the content lifecycle (acquisition, production, transcoding, distribution, etc.) and advanced capabilities for acquisition and synchronisation between camera feeds, audio and metadata, including matching exact frames from different cameras.
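Matching frames between feeds comes down to pairing frames whose timestamps are closest. The sketch below is one illustrative way to do this, not the AVCC's actual method; feed formats and the tolerance are assumptions.

```python
def match_frames(feed_a, feed_b, tolerance_ms=20):
    """Pair frames from two camera feeds by timestamp proximity.

    feed_a, feed_b: lists of (frame_id, timestamp_ms), sorted by time.
    Returns (frame_a, frame_b) pairs whose timestamps differ by at
    most tolerance_ms. Illustrative sketch only.
    """
    matches = []
    j = 0
    for fid_a, ts_a in feed_a:
        # Advance through feed_b while the next frame is at least as
        # close in time to the current feed_a frame.
        while (j + 1 < len(feed_b)
               and abs(feed_b[j + 1][1] - ts_a) <= abs(feed_b[j][1] - ts_a)):
            j += 1
        if feed_b and abs(feed_b[j][1] - ts_a) <= tolerance_ms:
            matches.append((fid_a, feed_b[j][0]))
    return matches

# Two 25 fps-ish feeds with a small clock offset between cameras.
pairs = match_frames([("a0", 0), ("a1", 40)], [("b0", 5), ("b1", 42)])
```

Because both feeds are time-sorted, the single pass keeps the matching linear in the number of frames, which matters for long recordings.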

Pervasive Content


Pervasive content is produced by mobile users and sensors located in real-world environments. Human sensing (e.g. biomechanics, physiology), human location tracking (indoors and outdoors), location-based content, real-world community interaction models, environment sensing and points of interest all characterise pervasive content.
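Location-based content and points of interest imply proximity queries over user positions. Below is a minimal sketch using the standard haversine great-circle distance; the POI data and the `nearby_pois` helper are illustrative, not a platform API.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def nearby_pois(user, pois, radius_m=100.0):
    """pois: list of (name, lat, lon); return names within radius_m
    of the user's (lat, lon) position. Illustrative sketch only."""
    return [name for name, lat, lon in pois
            if haversine_m(user[0], user[1], lat, lon) <= radius_m]

# A user a few metres from one POI and tens of kilometres from another.
found = nearby_pois((47.3901, 13.68),
                    [("lift", 47.39, 13.68), ("far", 48.0, 14.0)])
```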

Social Content


Social content is characterised by user-generated content produced by and consumed within online communities. Photos, videos, comments and opinions are disseminated by individuals to their friends using social networking platforms.

3D Content


The 3D Content Component (3DCC) supports experimenters in acquiring and manipulating 3D information from depth-sensing devices (e.g. the Kinect). Various types of information can be acquired: raw depth images, human skeletons, and low-resolution RGB images registered to the depth images. The 3DCC also provides functionality to manipulate this information to suit the experimenters' particular needs, as well as avatar editing and interactive avatar motion.
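A standard computation that raw depth images make possible is back-projecting a pixel to a 3D point with the pinhole camera model. The intrinsics below are illustrative (roughly Kinect-like), not the 3DCC's actual calibration.

```python
# Illustrative, roughly Kinect-like intrinsics; not actual calibration.
FX, FY = 525.0, 525.0    # focal lengths in pixels
CX, CY = 319.5, 239.5    # principal point (image centre)

def depth_pixel_to_point(u, v, depth_m):
    """Map pixel (u, v) with depth in metres to a camera-space
    (x, y, z) point using the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel at the image centre projects straight down the optical axis.
centre_point = depth_pixel_to_point(319.5, 239.5, 2.0)
```

Skeleton joints reported in depth-image coordinates can be lifted into 3D the same way, which is what makes the registered RGB and depth streams useful together.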