Wednesday, July 4, 2007

Digital Rights Management (DRM)

The use of mobile and portable devices to download and use content raises the
issue of rights management. The originator of the content, whether music, images,
movies, or games, will have spent a lot of time and money developing that content
and will want to exert some control over its future use. This is where DRM systems
can be applied to limit the use, and therefore prevent the misuse, of content.

There are a large number of proprietary DRM systems at present, which reflects
the growing availability of digital content over the Internet and other networks,
and is indicative of the lack of standards in this area. The Open Mobile Alliance
(OMA) has worked on enablers for DRM systems aimed at mobile handsets and
other portable devices, and many handset manufacturers now support OMA DRM
but may also support proprietary techniques as the market dictates. DRM systems
are built around a trusted entity known as a DRM agent (or client), which in the
case of a mobile network resides in the handset.

The DRM client is able to download content and the rights to use that content.
The content and rights can be kept separate or can be delivered in a combined format.
Rights are a mixture of permissions and constraints that indicate what can
be done with the associated content.

For example, content may be valid for 30 days, or until the end of the month;
the number of plays may be limited; and so on. It is the responsibility of the DRM agent
to apply the rights to the content and to track its usage, so that if, for example,
a song may only be played 30 times, the content cannot be used beyond that point unless
new rights are obtained.
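
As an illustration of the kind of bookkeeping a DRM agent performs, the sketch below models rights as a mixture of permissions and constraints and tracks usage against them. The class and field names are hypothetical and do not follow the OMA DRM data model; this is a minimal sketch of the idea, not a real agent.

```python
# Minimal sketch of DRM-agent rights enforcement (hypothetical names,
# not the OMA DRM data model).
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, Optional

@dataclass
class RightsObject:
    permission: str                      # e.g. "play", "display", "execute"
    max_count: Optional[int] = None      # constraint: allowed number of uses
    expires: Optional[datetime] = None   # constraint: absolute expiry time
    used: int = 0                        # usage tracked by the agent

class DRMAgent:
    """Trusted client: grants access to protected content only while the
    associated rights object still permits it."""
    def __init__(self) -> None:
        self._rights: Dict[str, RightsObject] = {}

    def install_rights(self, content_id: str, rights: RightsObject) -> None:
        self._rights[content_id] = rights

    def request_use(self, content_id: str, permission: str) -> bool:
        r = self._rights.get(content_id)
        if r is None or r.permission != permission:
            return False                               # no usable rights delivered
        if r.expires is not None and datetime.now() > r.expires:
            return False                               # time constraint exhausted
        if r.max_count is not None and r.used >= r.max_count:
            return False                               # count constraint exhausted
        r.used += 1                                    # track this use
        return True

# A song that may be played 30 times within 30 days of rights delivery.
agent = DRMAgent()
agent.install_rights("song-123", RightsObject(permission="play", max_count=30,
                                              expires=datetime.now() + timedelta(days=30)))
print(agent.request_use("song-123", "play"))           # True until the rights run out
```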

OMA release 1 DRM included a number of basic features, which were forward
lock (preventing content forwarding), combined delivery, and separate delivery.
Separate delivery, where the content and rights are kept separate, supports something
known as super-distribution, whereby users are able to distribute content at
will, but not the rights for that content. This can be very useful where the content
has some sort of preview mode that can be used to encourage recipients to acquire
their own rights to that content.

Press-to-Talk

The Press-to-Talk (PTT) service is a real-time, point-to-point and point-to-multipoint,
voice-based instant messaging application. This has proved a popular service in the
United States on the Nextel network, based on Motorola’s iDEN architecture. Many
of the features are the same as text-based IM, with buddy lists and chat rooms, etc.
Once again, the biggest potential problem affecting the widespread adoption of the
service will be compatibility between handsets and across networks.

There are, at present, four PTT solutions on the market.

1. Motorola iDEN. This is a well-established system with a proven track record
in the United States and good performance. Motorola supplies a wide range of
handsets to support the service; however, it is the only manufacturer of iDEN
handsets. iDEN does not support a presence service.

2. Kodiak. This system uses circuit-switched connections, which results in good
performance but misses out on the efficiency of packet-based delivery. The
system is also technology-agnostic: it has been deployed on GSM, CDMA,
and analog networks. However, there is a limited range of handsets available
for this service. Kodiak offers a thin client that allows vendors to implement
the service.

3. Qualcomm QChat. This service is proprietary to Qualcomm and is supported
only on cdma2000 1X. In this case, the capacity of the 1X network may
not be sufficient to ensure adequate performance. The QChat service is implemented
on Qualcomm's BREW platform.

4. PTT over Cellular (PoC). The final method is a proposed standard mechanism
from Ericsson, Nokia, Motorola, and Siemens for a PTT over Cellular
(PoC) system. This has been presented to the Open Mobile Alliance (OMA),
which is developing the proposal as a standard in line with the 3G UMTS
and IMS specifications from the 3GPP. It is based largely on existing protocols
and methods, namely IP and SIP. This system has the advantage that it will
run over any network: products are expected for GSM/GPRS, UMTS, and cdma2000,
and it is anticipated that it will be possible to interoperate a PTT service
across network boundaries.

DVB-H and UMTS

Studies have been conducted in relation to the roll-out of DVB-H and its possible integration
with cellular networks. DVB-H is backward-compatible with DVB-T, and
the two formats can be mixed in the same digital multiplex. While one multiplex can
convey six to eight DVB-T channels, it can be used for up to fifty DVB-H channels.
The DVB technology can be kept separate and used to deliver content to DVB-H-enabled
terminals, or a cellular technology, such as UMTS, could be used to
provide a reverse channel that allows users to browse for content and, when a content
selection is made, have that content delivered over a DVB-T channel (Figure 4.67).
This would mean that some degree of cooperation would be required between the
UMTS network operator and a broadcaster. The convergence of telecommunications
and entertainment services means that such cooperation is very likely.

Services and Security for Handsets

Given the range of mobile device capability in today’s market, there are many types
of service that may be available to the user. Some of these services depend on handset
capability; others rely on the services supported by the network.
Until recently, many services relied on proprietary techniques in the handset
or network, which meant that services were often limited to a certain handset
model or were only available with a particular mobile operator. The process of defining
services by organizations such as 3GPP has led to the standardization of many
services, or to the development of standard environments in which services can be developed,
deployed, and executed.

Described below are some of the services that can be seen on handsets that
are available today. All of these services have benefited from the process of
standardization.

Support for Location Technologies
There are a number of techniques for providing location information for handsets.
Some of these require the involvement of the handset in the location process; others use
data that is already collected by the handset during normal modes of operation
and do not place additional processing requirements on the handset.
The least accurate location technique uses either Timing Advance (TA) in GSM
or Cell Identity (Cell ID) in UMTS. When a mobile connects to the network (in
dedicated mode), the TA or Cell ID is known and can be reported to a location
server and used to deliver some form of location service to the user. More accurate
location information can be obtained using a triangulation process, wherein the
handset measures information from multiple base stations and uses this, along with
data obtained from the network, to calculate the true time differences of the signals
it has received.

In GSM, this technique is known as Enhanced Observed Time Difference (E-OTD);
in UMTS, the equivalent process is Observed Time Difference of Arrival (OTDOA).
In both cases, the handset measures the time difference
of signals between base stations; it also receives information about the real time difference
(RTD) between the base-station transmissions (either sent by broadcast messages or to one
specific mobile) and from these it calculates a geometric time difference. This geometric
time difference is a function of the handset's location and can be reported to the network
to deliver location services. The calculation requires some additional processing in the handset.
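
A rough sketch of the arithmetic involved is shown below, with made-up coordinates and timing offsets: the geometric time difference is obtained by subtracting the real time difference from the observed one, and the resulting range differences define hyperbolae whose intersection is the handset position (solved here with a simple Gauss-Newton iteration). Nothing here follows the actual E-OTD/OTDOA message formats.

```python
# Illustrative E-OTD/OTDOA-style position fix from time differences.
import numpy as np

C = 299_792_458.0  # speed of light in m/s

# Base-station coordinates on a local grid in metres (made-up values).
base_stations = np.array([
    [0.0,    0.0],
    [5000.0, 0.0],
    [2500.0, 4300.0],
])

def geometric_time_differences(observed_td, real_td):
    """GTD = OTD - RTD: strip the base stations' transmission offsets from the
    observed time differences, leaving only the propagation-delay component."""
    return np.asarray(observed_td) - np.asarray(real_td)

def position_from_gtd(gtd, guess=(2000.0, 1500.0), iterations=25):
    """Solve the hyperbolic range-difference equations with Gauss-Newton.
    gtd[i] is the geometric time difference between station i+1 and station 0."""
    x = np.array(guess, dtype=float)
    range_diff = np.asarray(gtd) * C                 # seconds -> metres
    for _ in range(iterations):
        d = np.linalg.norm(base_stations - x, axis=1)
        residual = (d[1:] - d[0]) - range_diff
        unit = (x - base_stations) / d[:, None]      # d(distance)/dx for each station
        J = unit[1:] - unit[0]
        step, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x -= step
    return x

# Simulate what a handset at (1800, 1200) would observe.
true_pos = np.array([1800.0, 1200.0])
dist = np.linalg.norm(base_stations - true_pos, axis=1)
rtd = np.array([3e-6, -2e-6])                        # network-signalled real time differences
otd = (dist[1:] - dist[0]) / C + rtd                 # what the handset measures

gtd = geometric_time_differences(otd, rtd)
print(position_from_gtd(gtd))                        # close to (1800, 1200)
```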

The final location mechanism is potentially the most accurate, and involves integrating
a GPS receiver in the handset. When required, the handset
can take GPS measurements and report these to the location servers in the network.
To speed the satellite acquisition process, the network can broadcast assistance
data, which tells handsets in a particular area which satellites are visible. GPS-based
location has the largest impact on the mobile because of the GPS receiver and the
processing that is required to resolve position information. It is possible to combine
location techniques; thus, for example, when GPS fails to give a fix because the
handset is inside a building, the location information could be obtained from the
time-difference techniques.
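
One way to picture this combination of techniques is a simple ordered fallback, sketched below with hypothetical measurement functions; a real implementation would sit in the handset or location server and use actual measurement reports.

```python
# Sketch of combining location methods: try the most accurate first and
# fall back when it cannot produce a fix. Measurement functions are hypothetical.
from typing import Callable, List, Optional, Tuple

Fix = Tuple[float, float]          # (latitude, longitude) in degrees

def locate(methods: List[Callable[[], Optional[Fix]]]) -> Optional[Fix]:
    """Return the first fix produced by the methods, ordered by expected accuracy."""
    for method in methods:
        fix = method()
        if fix is not None:
            return fix
    return None

def gps_fix() -> Optional[Fix]:
    return None                    # e.g. no satellites visible indoors

def time_difference_fix() -> Optional[Fix]:
    return (51.5014, -0.1419)      # E-OTD/OTDOA-style fix (made-up value)

def cell_id_fix() -> Optional[Fix]:
    return (51.50, -0.14)          # coarse cell-based fix (made-up value)

print(locate([gps_fix, time_difference_fix, cell_id_fix]))   # falls back past GPS
```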


Mobile TV Reception
The multimedia capabilities of handsets have raised the possibility of delivering
television to mobile or handheld devices, and indeed a number of services have
been launched that allow users to download short video clips and even video ringtones
onto their phones.

However, these services use the bearers or channels of the 2.5G or 3G
network, which in many cases may not have the bandwidth or the format to support
high-quality video transmission. There is also an impact on other services:
because video requires a relatively high bandwidth, it limits the capacity available
for all the other services delivered by the network.

A possible solution is the use of a technology known as Digital Video Broadcast-Handheld
(DVB-H). DVB-H is a derivative of the main DVB Terrestrial (DVB-T)
format and includes a number of features targeted at delivering content to
mobile devices.

To minimize power consumption in a DVB-H receiver, the information sent
to the handheld device is time-sliced. That is, it is delivered in concentrated bursts
so the receiver is not switched on all the time. This reduces power consumption by
up to 95 percent compared to DVB-T. In addition, the DVB-H standard includes
a number of features aimed at supporting user mobility, such as seamless handovers
between DVB transmitters. The signal format is able to accommodate users moving
at several hundred kilometers per hour.
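
The saving comes directly from the receiver's duty cycle, as the back-of-the-envelope sketch below illustrates; the burst and service bit rates are assumed figures, not values taken from the DVB-H specification.

```python
# Rough duty-cycle calculation for DVB-H-style time slicing (assumed figures).
burst_bitrate = 10_000_000      # bit/s delivered while the receiver is switched on
service_bitrate = 500_000       # bit/s the selected mobile-TV service actually needs
burst_size = 2_000_000          # bits buffered per burst

on_time = burst_size / burst_bitrate        # seconds the front end is powered per burst
cycle_time = burst_size / service_bitrate   # seconds until the buffer needs refilling
duty_cycle = on_time / cycle_time           # = service_bitrate / burst_bitrate

print(f"receiver active {duty_cycle:.1%} of the time, "
      f"roughly a {1 - duty_cycle:.0%} power saving over continuous reception")
```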

Mobile Operating Systems

The role of an operating system (OS) within a handset or handheld device is no different
from that of the OS deployed in computing terminals; the major differences between
the two environments are the result of handset constraints.

The OS is responsible for a range of tasks, which include management of the
processor, memory, and devices. Processor management determines
when an application can use the central processor and how to manage the resources
when multiple processes have to operate simultaneously. Memory management
allocates memory to processes so that they do not overlap and controls the reading
and writing of data to memory locations. Additionally, the OS will look after storage
of data, perhaps on a card or even a disk, and will also manage devices, or the
input/output (I/O) capabilities. The user interface (UI) is generally considered part of the OS,
although not every OS includes one; some instead allow licensees to customize a UI to their
own design.

Operating above the OS will in most cases be a series of applications; and to
support these, the OS will have an application programming interface (API), which
abstracts the functionality of the OS for application developers.

In the context of a mobile handset, an OS has a set of limitations placed on it
that are the result of the processor capabilities and limited memory. Therefore, the
OS in these cases needs a very small footprint, which demands very efficient code.
There are broadly two approaches to writing an OS for mobiles. The first
is to develop an OS from the ground up, specifically for the mobile environment.
The second is to take an OS that is perhaps used in desktop devices and produce a
compact version.

One critical area for mobile OS developers is reliability. The end user of a mobile
device will not tolerate system crashes and lockups. This demands not only reliability,
but also robustness, as the underlying connectivity between the device and
the network is error-prone.

The range of handset OSs in today's market runs from completely closed or proprietary
systems through to open platforms (Figure 4.50). There are many variants
between these two ends of the scale, where developers are able to create content
for a particular OS without having to know its technical details.

Proprietary Operating Systems
Many handset products have an OS that is proprietary, and in many cases the
details of the OS are unavailable, perhaps even to developers. However, the popularity
of smart phones, with their comprehensive operating systems, and the recognition
that content developers can generate revenues for carriers, have meant that even
proprietary OSs now have a degree of openness associated with them.

For example, a proprietary OS will often include Java functionality, which
offers developers a route to content production through standardized, published
APIs; meanwhile, the core of the OS that drives the phone functionality
remains hidden. Handset vendors that have moved along this path will generally
offer developer platforms, software development kits (SDKs), training, documentation,
and support options.

Universal Serial Bus (USB), Bluetooth, Bluetooth Profiles

The USB initiative was an attempt by the IT industry to provide a simple, standardized
interface that could support many applications and was capable of plug-and-play
operation. The outcome of this was the definition of the USB connection.
There are two major versions of USB: version 1.1, which supports
data rates of 1.5 Mbps (low-speed) and 12 Mbps (full-speed), was supplemented
recently by USB version 2.0, which supports 480 Mbps (high-speed).
USB is electrically and mechanically a very simple interface with data lines
and power connections that allows a USB host device to provide power to connected
devices. Although the USB standard has been modified over time to include
the possibility of USB ports on mobile phones and similar devices, many handset
vendors depart from the USB standard when it comes to the physical connection
of USB on the phone.

The standard USB connection is too large for mobile devices; and although a
small form-factor version is now in the standard, many handset vendors choose to
use a proprietary physical connection for the USB on their handset ranges. This
means that end users will require a manufacturer-specific cable to connect their
handsets to other USB devices.

Commonly, the USB port found on handsets operates at full-speed (12 Mbps),
with the handset acting as a USB device; some handsets support version 1.1,
whereas more recent handsets support version 2.0.

Bluetooth
Bluetooth is a radio-based connection option that aims to solve some of the issues
addressed by IrDA, in particular the number of different cables that users require
to interconnect the multitude of terminals they own.

To overcome some of the limitations of IrDA, Bluetooth operates in the 2.4-GHz
Industrial, Scientific, and Medical (ISM) band. The advantage
of this band is that it is license-exempt, which means radio equipment operating in
this band can do so without users requiring operating licenses. However, to support
the coexistence of many radio applications in the band, it is regulated in terms of
usable power levels and spectrum parameters.

Bluetooth was developed by the telecommunications industry, so the initial focus
of the standard was to provide a means for mobile handsets to interconnect with
associated devices, such as headsets, PDAs, and laptop computers. However, because
of its ready availability, Bluetooth is finding its way into other consumer products.
The Bluetooth radio component operates across up to 79 channels in the 2.4-
GHz band and, to mitigate interference, frequency hops around these channels at
a rate of 1600 hops per second. It should be noted that the full range of Bluetooth
channels is not available in all countries because of local regulatory constraints. The
power classes defined for Bluetooth devices support typical ranges up to 10 meters,
although the class 1 devices at 20 dBm can achieve ranges greater than 100 meters.
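
For illustration only, the snippet below mimics the shape of Bluetooth's channel use (79 channels of 1 MHz width starting at 2402 MHz, 1600 hops per second); it does not reproduce the real hop-selection algorithm, which is derived from the master device's clock and address.

```python
# Illustrative only: Bluetooth-style hop set and dwell time, with a random
# sequence standing in for the real clock/address-driven hop selection.
import random

channels_mhz = [2402 + k for k in range(79)]   # 79 channels, 1 MHz spacing
dwell_seconds = 1 / 1600                       # 1600 hops per second (625 us slots)

random.seed(0)
hop_sequence = [random.choice(channels_mhz) for _ in range(8)]
print(hop_sequence, f"- dwell {dwell_seconds * 1e6:.0f} us per hop")
```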

Bluetooth Profiles
As the Bluetooth specifications were written, it became obvious that there were
numerous real-world applications for the technology and that it would be unrealistic
to include each and every one of these in the standards. Therefore, Bluetooth
is based on the concept of a series of defined profiles, where a profile specifies how
the Bluetooth protocols should operate to provide a set of functions. The profiles
can be viewed as a series of building blocks from which real applications can be
constructed. New profiles can be added to the Bluetooth standard after completing
an agreement process.

An example of the relationship between a usage model and profiles is the 3-
in-1 phone. The 3-in-1 phone has three operational modes: (1) it is able to act as a
normal mobile handset and access the cellular network; (2) it can use Bluetooth to
access a gateway device attached to a landline (therefore acting as a cordless phone);
and (3) it is able to connect directly to other handsets using Bluetooth (therefore
acting as an intercom device). This usage model is based on two Bluetooth profiles:
(1) the Cordless Telephony Profile (CTP) and (2) the Intercom Profile (IP).

Outside the profiles that support applications, there are two generic Bluetooth
profiles, known as the Generic Access Profile (GAP) and the Service Discovery
Application Profile (SDAP).

GAP concerns the discovery procedures that allow Bluetooth devices to find
one another, and includes functions for establishing a Bluetooth connection and
optionally adding security. SDAP is used by one Bluetooth device to discover what
services are offered by a remote Bluetooth device.
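
In practice, GAP and SDAP are what a developer exercises when scanning for nearby devices and then asking each one what it can do. The sketch below uses the third-party PyBluez library (imported as `bluetooth`); the exact fields returned can vary by platform, so treat it as an assumed illustration rather than a reference.

```python
# GAP inquiry followed by SDAP-style service discovery, using the
# third-party PyBluez library (assumed to be installed as "bluetooth").
import bluetooth

# GAP: discover nearby devices that are in discoverable mode.
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)

for addr, name in nearby:
    print(f"Found {name} at {addr}")
    # SDAP: query the remote device's SDP server for the services it offers.
    for svc in bluetooth.find_service(address=addr):
        print(f"  service: {svc.get('name')} "
              f"(protocol {svc.get('protocol')}, port {svc.get('port')})")
```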

Wireless LAN (WLAN)
Although there are a number of WLAN standards on the market, the dominant
series are those produced by the Institute of Electrical and Electronics Engineers
(IEEE). The IEEE 802 project oversees standards for all LAN technologies,
both wired and wireless, and the 802.11 working group is responsible for WLAN
standards. IEEE 802.11 has defined several radio technologies for WLAN, known
as 802.11b, 802.11a, and 802.11g, with a further variant, 802.11n, in draft. These differ in terms of the
data rates they support and the spectrum band in which they operate. Two of the
WLAN standards are designed to operate in the 2.4-GHz ISM band, while the
third, 802.11a, operates in bands around 5 GHz.

The preexistence of radar systems at 5 GHz in Europe means that 802.11a systems
cannot be deployed in this region without suitable modification. Additional
interference mitigation techniques were added by the 802.11h standard, which
adapted 802.11a radio for use under European regulations.

The term “WiFi” (wireless fidelity) is often applied to 802.11 systems. WiFi is actually
a brand name that belongs to an industry association, the WiFi Alliance, whose role
is to test interoperability of WLAN products. Any device that carries the
WiFi mark will have been tested against a baseline implementation of the standards
and has been demonstrated to operate with products from other manufacturers.

Video Coding

Although reducing the number of pixels in an image reduces the bit rate of a video
signal quite dramatically, further reductions are necessary if video is to be supported
over the limited-bandwidth channels of a cellular network. For most applications,
the number of frames per second can also be reduced from the standard 50
or 60 frames per second used in broadcast systems to a figure between 10 and 30
frames per second. Frame rates as low as 10 frames per second will be acceptable for
some applications, such as videoconferencing or videocalling.
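
The effect of shrinking the picture and cutting the frame rate can be seen from the raw, pre-compression bit rates; the figures below are illustrative, assuming 4:2:0 sampling at roughly 12 bits per pixel.

```python
# Raw, uncompressed bit rates before any coding (illustrative assumptions:
# 4:2:0 sampling, i.e. about 12 bits per pixel on average).
def raw_bitrate(width: int, height: int, fps: int, bits_per_pixel: int = 12) -> int:
    return width * height * fps * bits_per_pixel

broadcast = raw_bitrate(720, 576, 50)    # broadcast-style source at 50 frames/s
mobile    = raw_bitrate(176, 144, 10)    # QCIF picture at 10 frames/s

print(f"{broadcast / 1e6:.0f} Mbps -> {mobile / 1e6:.1f} Mbps "
      "before any spatial or temporal compression is applied")
```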

Further reductions in bit rate are achieved by employing coding or compression
techniques. Although there are a number of techniques on the market for processing
video, they have similarities in terms of concept, using a combination of spatial and
temporal compression (Figure 4.36). Spatial compression techniques analyze redundancy
within a frame, produced for example by a large number of adjacent pixels all
having the same or similar levels of brightness (luminance) and color (chrominance).
This redundancy can then be removed by coding. In a similar fashion, temporal
compression looks for redundancy between adjacent frames; this is often the result
of an image background, for example, that does not change significantly between
one frame and the next. Again, this redundancy can be removed by coding.
As the power of electronic processors, particularly digital signal processors
(DSPs), has improved, video codecs have been designed that are able to offer equivalent
quality to their predecessors but at reduced bit rates.
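
A toy version of temporal compression is sketched below: only the difference between consecutive frames is coded, and negligible changes are discarded. Real codecs use motion-compensated prediction and transform coding of the residual, so this is a conceptual sketch only.

```python
# Toy temporal compression: code only what changes between frames
# (a crude stand-in for the motion-compensated prediction in real codecs).
import numpy as np

def encode_frame(previous: np.ndarray, current: np.ndarray, threshold: int = 4) -> np.ndarray:
    """Frame-to-frame difference with small changes discarded (lossy)."""
    diff = current.astype(np.int16) - previous.astype(np.int16)
    diff[np.abs(diff) < threshold] = 0       # temporal redundancy removed
    return diff

def decode_frame(previous: np.ndarray, diff: np.ndarray) -> np.ndarray:
    return np.clip(previous.astype(np.int16) + diff, 0, 255).astype(np.uint8)

# Two QCIF-sized luminance frames: static background, one small moving block.
prev_frame = np.full((144, 176), 120, dtype=np.uint8)
next_frame = prev_frame.copy()
next_frame[60:70, 80:90] = 200               # the only region that changes

residual = encode_frame(prev_frame, next_frame)
print("pixels that need coding:", np.count_nonzero(residual), "of", residual.size)
reconstructed = decode_frame(prev_frame, residual)
```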

Coding for still images is based on the same spatial compression as used for
video; there is no need to apply temporal compression to a single image. However,
the different still image formats are better suited to one type of image or another.
For example, JPEG works well with black and white or color natural images (such
as photographs), whereas GIF works better for black and white images that contain
lines and blocks (such as cartoons).

There is also a distinction between coding that is lossy and coding that is lossless.
The video coding techniques described here and JPEG for still images are all
classified as lossy in that they remove information through coding that cannot be
regenerated later. On the other hand, GIF is a lossless coding technique and consequently
does not remove information through its coding.

Video Coding Standards
A number of video coding standards exist in commercial applications, and two
main groups have worked on these standards: (1) the International Organization for
Standardization (ISO) and (2) the Telecommunication Standardization Sector of the
International Telecommunication Union (ITU-T). The ISO is responsible for the Moving
Picture Experts Group (MPEG), which has produced a series of video coding systems (Figure 4.37).
The first MPEG standard, MPEG-1, was released in 1992 and was aimed at
providing acceptable, but sub-broadcast, quality video that could be used for CDs
and games. The video coding produced an output at around 1.5 Mbps, matching the
data rate of a single-speed CD. The audio coding portion of the MPEG-1 standard
included three coding options offering progressively greater compression of the
audio component. The third of these options, MPEG-1 Layer 3 or simply MP3, has
become a dominant standard for the distribution of music and audio over the Internet.

In 1994, MPEG-2 was released, and offered improvements on the MPEG-1
standard. MPEG-2 can work at a variety of bit rates, but at 1 to 3 Mbps outperforms
the quality of MPEG-1. MPEG-2 is used for digital versatile disks (DVDs)
and digital video broadcast (DVB), and includes a range of audio coding options
such as advanced audio coding (AAC).

The most recent addition to the MPEG family, MPEG-4, was originally designed
for low bit-rate services, with channels operating below 64 kbps, but it can also be
employed at higher bit rates into the Mbps range. The design intention was to provide
video coding for some of the "new" applications that were appearing, such as streaming services.