<?xml version="1.0" encoding="utf-8"?> <!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [ <!ENTITY % BOOK_ENTITIES SYSTEM "Media_Server_User_Guide.ent"> %BOOK_ENTITIES; <!-- NOTES ON CHANGING ENTITY FILES Important: take care when changing entities for directories * Linux path separators must be: "/" * Windows path separators must be: "\" * Entities which represent directories MUST HAVE a final path separator: my/path/ <- (final separator) If this final separator is missing, then the documentation will be wrong. --><!-- Common entities: same across all books except for BOOKID --><!ENTITY PRODUCT "JBoss Communications Platform"> <!ENTITY BOOKID "Media_Server_User_Guide"> <!ENTITY YEAR "2009"> <!ENTITY HOLDER "Red Hat Inc"> <!-- Shared: Configuring the JBOSS_HOME Environment Variable --><!ENTITY MOB_JBOSS_HOME_LIN "mobicents-all-1.2.1.GA-jboss-4.2.3.GA/jboss/"> <!ENTITY JBCP_JBOSS_HOME_LIN "jboss-eap-4.3/jboss-as/"> <!-- Platform Installation Guide --><!ENTITY PIG_MOB_PLAT_VERSION "1.2.1 GA"> <!ENTITY PIG_MOB_PLAT_SIZE "200 MB"> <!ENTITY PIG_MOB_PLAT_ZIP "mobicents-all-1.2.1.GA-jboss-4.2.3.GA.zip"> <!ENTITY PIG_MOB_JBOSS_HOME_BIN_LIN "jboss-4.2.3.GA/bin/"> <!ENTITY PIG_MOB_JBOSS_HOME_BIN_WIN "jboss-4.2.3.GA\bin\"> <!ENTITY PIG_JBCP_PLAT_VERSION "1.2.1"> <!ENTITY PIG_JBCP_PLAT_SIZE "310 MB"> <!ENTITY PIG_JBCP_PLAT_ZIP "JBCP-1.2.1.GA-jboss-eap-4.3.zip"> <!ENTITY PIG_JBCP_JBOSS_HOME_BIN_LIN "jboss-eap-4.3/jboss-as/bin/"> <!ENTITY PIG_JBCP_JBOSS_HOME_BIN_WIN "jboss-eap-4.3\jboss-as\bin\"> <!-- JAIN SLEE Server User Guide --><!ENTITY JSS_VERSION "1.2.5 GA"> <!ENTITY JSS_SIZE "100 MB"> <!ENTITY JSS_ZIP "mobicents-jainslee-server-1.2.5.GA-jboss-4.2.3.GA.zip"> <!ENTITY JSS_JBOSS_HOME "jboss-4.2.3.GA"> <!ENTITY JSS_JBOSS_HOME_BIN_LIN "jboss-4.2.3.GA/bin/"> <!ENTITY JSS_JBOSS_HOME_BIN_WIN "jboss-4.2.3.GA\bin\"> <!-- Media Server User Guide 
--><!ENTITY MS_VERSION "1.1.0 GA"> <!ENTITY MS_SIZE "110 MB"> <!ENTITY MS_ZIP "mobicents-media-server-all-1.1.0.GA.zip"> <!ENTITY MS_JBOSS_HOME_BIN_LIN "jboss-4.2.3.GA/bin/"> <!ENTITY MS_JBOSS_HOME_BIN_WIN "jboss-4.2.3.GA\bin\"> <!-- SIP Servlet Server Installation Guide --><!ENTITY SSS_MSS4J_VERSION "1.0.0"> <!ENTITY SSS_MSS4J_SIZE "135 MB"> <!ENTITY SSS_MSS4J_ZIP "mss-1.0.0-jboss-4.2.3.GA-0904211307.zip"> <!ENTITY SSS_MSS4T_VERSION "0.5"> <!ENTITY SSS_MSS4T_SIZE "20 MB"> <!ENTITY SSS_MSS4T_ZIP "mss-1.0.0.GA-apache-tomcat-6.0.14-0904211257.zip"> <!-- SIP Presence Service User Guide --><!ENTITY SPS_INT_VERSION "1.0.0 BETA4"> <!ENTITY SPS_INT_SIZE "90 MB"> <!ENTITY SPS_INT_ZIP "mobicents-sip-presence-integrated-1.0.0.BETA4-CP1.zip"> <!ENTITY SPS_INT_JBOSS_HOME_BIN_LIN "bin/"> <!ENTITY SPS_INT_JBOSS_HOME_BIN_WIN "bin\"> <!ENTITY SPS_XDMS_VERSION "1.0.0 BETA4 CP1"> <!ENTITY SPS_XDMS_SIZE "90 MB"> <!ENTITY SPS_XDMS_ZIP "mobicents-sip-presence-xdms-1.0.0.BETA4-CP1.zip"> <!ENTITY SPS_XDMS_JBOSS_HOME_BIN_LIN "bin/"> <!ENTITY SPS_XDMS_JBOSS_HOME_BIN_WIN "bin\"> ]> <book lang="en-US"> <bookinfo id="msug-Media_Server_User_Guide" lang="en-US"> <title>Media Server User Guide</title> <subtitle>The <application>Mobicents Platform</application> Media Server Gateway Guide</subtitle> <productname>Mobicents Platform</productname> <productnumber>1.2.1</productnumber> <edition>2.0</edition> <pubsnumber>1</pubsnumber> <abstract> <para><application>The Mobicents Platform</application> is the first and only open source <acronym>VoIP</acronym> platform certified for <acronym>JAIN SLEE</acronym> 1.0 and <acronym>SIP</acronym> Servlets 1.1 compliance. 
<application>Mobicents</application> serves as a high-performance core for Service Delivery Platforms (<acronym>SDP</acronym>s) and <acronym>IP</acronym> Multimedia Subsystems (<acronym>IMS</acronym>s) by leveraging <acronym>J2EE</acronym> to enable the convergence of data and video in Next-Generation Intelligent Network (<acronym>NGIN</acronym>) applications.</para> <para>The <application>Mobicents Platform</application> enables the composition of predefined Service Building Blocks (<acronym>SBB</acronym>s) such as Call-Control, Billing, User-Provisioning, Administration and Presence-Sensing. Out-of-the-box monitoring and management of <application>Mobicents</application> components is achieved through <acronym>JMX</acronym> Consoles. <acronym>JSLEE</acronym> allows popular protocol stacks such as <acronym>SIP</acronym> to be plugged in as Resource Adapters (<acronym>RA</acronym>s), and Service Building Blocks—which share many similarities with <acronym>EJB</acronym>s—allow the easy accommodation and integration of enterprise applications with end points such as the Web, Customer Relationship Management (<acronym>CRM</acronym>) systems and Service-Oriented Architectures (<acronym>SOA</acronym>s). 
The <application>Mobicents Platform</application> is the natural choice for telecommunication Operations Support Systems (OSSs) and Network Management Systems (NMSs).</para> <para>In addition to the telecommunication industry, the <application>Mobicents Platform</application> is suitable for a variety of problem domains demanding an Event-Driven Architecture (<acronym>EDA</acronym>) for high-volume, low-latency signaling, such as financial trading, online gaming, (<acronym>RFID</acronym>) sensor network integration, and distributed control.</para></abstract> <corpauthor> <inlinemediaobject> <imageobject> <imagedata fileref="Common_Content/images/title_logo.svg" format="SVG"/> </imageobject> <textobject> <phrase>Logo</phrase> </textobject> </inlinemediaobject> </corpauthor> <copyright> <year>2009</year> <holder>Red Hat Inc</holder> </copyright> <!-- ORIGINAL: <xi:include href="Common_Content/Legal_Notice.xml" xmlns:xi="http://www.w3.org/2001/XInclude" /> --><!-- FOR JDOCBOOK: --><!-- <xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="Common_Content/Legal_Notice.xml"> <xi:fallback xmlns:xi="http://www.w3.org/2001/XInclude"> <xi:include href="fallback_content/Legal_Notice.xml" xmlns:xi="http://www.w3.org/2001/XInclude"></xi:include> </xi:fallback> </xi:include> --><authorgroup lang="en-US"> <author> <firstname>Jared</firstname> <surname>Morgan</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv>Engineering Content Services</orgdiv> </affiliation> <email condition="mobicents">[email protected]</email> </author> <author> <firstname>Douglas</firstname> <surname>Silas</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv>Engineering Content Services</orgdiv> </affiliation> <email condition="mobicents">[email protected]</email> </author> <author> <firstname>Ivelin</firstname> <surname>Ivanov</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email 
protected]</email> </author> <author> <firstname>Vladimir</firstname> <surname>Ralev</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Eduardo</firstname> <surname>Martins</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Jean</firstname> <surname>Deruelle</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Oleg</firstname> <surname>Kulikov</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Amit</firstname> <surname>Bhayani</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Luis</firstname> <surname>Barreiro</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Alexandre</firstname> <surname>Mendonça</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Bartosz</firstname> <surname>Baranowski</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Pavel</firstname> <surname>Šlégr</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> 
</affiliation> <email condition="mob">[email protected]</email> </author> <author> <firstname>Yulian</firstname> <surname>Oifa</surname> <affiliation> <orgname>Red Hat, </orgname> <orgdiv condition="mob">Mobicents</orgdiv> </affiliation> <email condition="mob">[email protected]</email> </author> </authorgroup> </bookinfo> <preface lang="en-US"> <title>Preface</title> <para> </para> <section lang="en-US" xml:base="Common_Content/Conventions.xml"> <title>Document Conventions</title> <para> This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information. </para> <para> In PDF and paper editions, this manual uses typefaces drawn from the <ulink url="https://fedorahosted.org/liberation-fonts/">Liberation Fonts</ulink> set. The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default. </para> <section> <title>Typographic Conventions</title> <para> Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows. </para> <para> <literal>Mono-spaced Bold</literal> </para> <para> Used to highlight system input, including shell commands, file names and paths. Also used to highlight key caps and key-combinations. For example: </para> <blockquote> <para> To see the contents of the file <filename>my_next_bestselling_novel</filename> in your current working directory, enter the <command>cat my_next_bestselling_novel</command> command at the shell prompt and press <keycap>Enter</keycap> to execute the command. </para> </blockquote> <para> The above includes a file name, a shell command and a key cap, all presented in Mono-spaced Bold and all distinguishable thanks to context. 
</para> <para> Key-combinations can be distinguished from key caps by the hyphen connecting each part of a key-combination. For example: </para> <blockquote> <para> Press <keycap>Enter</keycap> to execute the command. </para> <para> Press <keycombo><keycap>Ctrl</keycap><keycap>Alt</keycap><keycap>F1</keycap></keycombo> to switch to the first virtual terminal. Press <keycombo><keycap>Ctrl</keycap><keycap>Alt</keycap><keycap>F7</keycap></keycombo> to return to your X-Windows session. </para> </blockquote> <para> The first sentence highlights the particular key cap to press. The second highlights two sets of three key caps, each set pressed simultaneously. </para> <para> If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in <literal>Mono-spaced Bold</literal>. For example: </para> <blockquote> <para> File-related classes include <classname>filesystem</classname> for file systems, <classname>file</classname> for files, and <classname>dir</classname> for directories. Each class has its own associated set of permissions. </para> </blockquote> <para> <application>Proportional Bold</application> </para> <para> This denotes words or phrases encountered on a system, including application names; dialogue box text; labelled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example: </para> <blockquote> <para> Choose <guimenu>System > Preferences > Mouse</guimenu> from the main menu bar to launch <application>Mouse Preferences</application>. In the <guilabel>Buttons</guilabel> tab, click the <guilabel>Left-handed mouse</guilabel> check box and click <guibutton>Close</guibutton> to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand). 
</para> <para> To insert a special character into a <application>gedit</application> file, choose <guimenu>Applications > Accessories > Character Map</guimenu> from the main menu bar. Next, choose <guimenu>Search > Find…</guimenu> from the <application>Character Map</application> menu bar, type the name of the character in the <guilabel>Search</guilabel> field and click <guibutton>Next</guibutton>. The character you sought will be highlighted in the <guilabel>Character Table</guilabel>. Double-click this highlighted character to place it in the <guilabel>Text to copy</guilabel> field and then click the <guibutton>Copy</guibutton> button. Now switch back to your document and choose <guimenu>Edit > Paste</guimenu> from the <application>gedit</application> menu bar. </para> </blockquote> <para> The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in Proportional Bold and all distinguishable by context. </para> <para> Note the <guimenu>></guimenu> shorthand used to indicate traversal through a menu and its sub-menus. This is to avoid the difficult-to-follow 'Select <guimenuitem>Mouse</guimenuitem> from the <guimenu>Preferences</guimenu> sub-menu in the <guimenu>System</guimenu> menu of the main menu bar' approach. </para> <para> <command><replaceable>Mono-spaced Bold Italic</replaceable></command> or <application><replaceable>Proportional Bold Italic</replaceable></application> </para> <para> Whether Mono-spaced Bold or Proportional Bold, the addition of Italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example: </para> <blockquote> <para> To connect to a remote machine using ssh, type <command>ssh <replaceable>username</replaceable>@<replaceable>domain.name</replaceable></command> at a shell prompt. 
If the remote machine is <filename>example.com</filename> and your username on that machine is john, type <command>ssh [email protected]</command>. </para> <para> The <command>mount -o remount <replaceable>file-system</replaceable></command> command remounts the named file system. For example, to remount the <filename>/home</filename> file system, the command is <command>mount -o remount /home</command>. </para> <para> To see the version of a currently installed package, use the <command>rpm -q <replaceable>package</replaceable></command> command. It will return a result as follows: <command><replaceable>package-version-release</replaceable></command>. </para> </blockquote> <para> Note the words in bold italics above — username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system. </para> <para> Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example: </para> <blockquote> <para> When the Apache HTTP Server accepts requests, it dispatches child processes or threads to handle them. This group of child processes or threads is known as a <firstterm>server-pool</firstterm>. Under Apache HTTP Server 2.0, the responsibility for creating and maintaining these server-pools has been abstracted to a group of modules called <firstterm>Multi-Processing Modules</firstterm> (<firstterm>MPMs</firstterm>). Unlike other modules, only one module from the MPM group can be loaded by the Apache HTTP Server. </para> </blockquote> </section> <section> <title>Pull-quote Conventions</title> <para> Two, commonly multi-line, data types are set off visually from the surrounding text. 
</para> <para> Output sent to a terminal is set in <computeroutput>Mono-spaced Roman</computeroutput> and presented thus: </para> <screen>books        Desktop   documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1  downloads      images  notes  scripts  svgs</screen> <para> Source-code listings are also set in <computeroutput>Mono-spaced Roman</computeroutput> but are presented and highlighted as follows: </para> <programlisting language="Java">package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();
      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}</programlisting> </section> <section> <title>Notes and Warnings</title> <para> Finally, we use three visual styles to draw attention to information that might otherwise be overlooked. </para> <note> <title>Note</title> <para> A note is a tip or shortcut or alternative approach to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier. </para> </note> <important> <title>Important</title> <para> Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring Important boxes won't cause data loss but may cause irritation and frustration. </para> </important> <warning> <title>Warning</title> <para> A Warning should not be ignored. Ignoring warnings will most likely cause data loss. 
</para> </warning> </section> </section> <!-- <xi:include href="Feedback.xml" xmlns:xi="http://www.w3.org/2001/XInclude"> <xi:fallback xmlns:xi="http://www.w3.org/2001/XInclude"> <xi:include href="Common_Content/Feedback.xml" xmlns:xi="http://www.w3.org/2001/XInclude" /> </xi:fallback> </xi:include> --> </preface> <chapter id="ittms-Introduction_to_the_Media_Server" lang="en-US"> <!-- chapter id nickname: ittms --><title>Introduction to the Mobicents Media Server</title> <section id="ittms-Overview-the_Reasoning_and_Need_for_Media_Servers"> <title>Overview: the Reasoning and Need for Media Servers</title> <formalpara> <title>Media Gateways Bridge Multiple Technologies</title> <para> Today, all communications can be routed through computers. Widespread access to broadband Internet and the ubiquity of Internet Protocol (<acronym>IP</acronym>) enable the convergence of voice, data and video. Media gateways provide the ability to switch voice media between a network and its access point. Using Digital Subscriber Line (<acronym>DSL</acronym>) and fast-Internet cable technology, a media gateway converts, compresses and packetizes voice data for transmission back-and-forth across the Internet backbone for landline and wireless phones. Media gateways sit at the intersection of Public Switched Telephone Networks (<acronym>PSTN</acronym>s) and wireless or IP-based networks. </para> </formalpara> <formalpara> <title>The Justification for Media Gateways for VoIP</title> <para> Multiple market demands are pushing companies to converge all of their media services using media gateways with Voice-over-IP (<acronym>VoIP</acronym>) capabilities. Companies have expectations for such architectures, which include: </para> </formalpara> <variablelist> <varlistentry> <term>Lowering initial costs</term> <listitem> <para> Capital investment is decreased because low-cost commodity hardware can be used for multiple functions. 
</para> </listitem> </varlistentry> <varlistentry> <term>Lowering development costs</term> <listitem> <para> Open system hardware and software standards with well-defined applications reduce costs, and Application Programming Interfaces (<acronym>API</acronym>s) accelerate development. </para> </listitem> </varlistentry> <varlistentry> <term>Handling multiple media types</term> <listitem> <para> Companies want <acronym>VoIP</acronym> solutions today, but also need to choose extensible solutions that will handle video in the near future. </para> </listitem> </varlistentry> <varlistentry> <term>Lowering the costs of deployment and maintenance</term> <listitem> <para> Standardized, modular systems reduce training costs and maintenance while simultaneously improving uptime. </para> </listitem> </varlistentry> <varlistentry> <term>Enabling rapid time-to-market</term> <listitem> <para> Early market entry hits the window of opportunity and maximizes revenue. </para> </listitem> </varlistentry> </variablelist> <formalpara> <title>What Is the Mobicents Media Server?</title> <para> The Mobicents Media Gateway is an open source Media Server aimed at: </para> </formalpara> <itemizedlist> <listitem> <para> Delivering competitive, complete, best-of-breed media gateway functionality of the highest quality. </para> </listitem> <listitem> <para> Meeting the demands of converged wireless and landline networks, DSL and cable broadband access, and fixed-mobile converged <acronym>VoIP</acronym> networks from a single, and singularly-capable, media gateway platform. </para> </listitem> <listitem> <para> Increasing flexibility with a media gateway that supports a wide variety of call control protocols, and which possesses an architecture that can scale to meet the demands of small-carrier providers as well as large enterprises. 
</para> </listitem> </itemizedlist> </section> <section id="ittms-Media_Server_Architecture"> <title>Media Server Architecture</title> <para> Media services have played an important role in the traditional Time Division Multiplexing (<acronym>TDM</acronym>)-based telephone network. As the network migrates to an Internet Protocol (<acronym>IP</acronym>)-based environment, media services are also moving to new environments. </para> <para> One of the most exciting trends is the emergence and adoption of complementary modular standards that leverage the Internet to enable media services to be developed, deployed and updated more rapidly than before, in a network architecture that supports the two concepts of <emphasis>provisioning-on-demand</emphasis> and <emphasis>scaling-on-demand</emphasis>. </para> <section id="ittms-Design_Overview"> <title>Design Overview</title> <formalpara> <title>Base Architecture</title> <para> The Media Server is developed using the JBoss Microcontainer kernel. The JBoss Microcontainer provides the following major capabilities: <itemizedlist> <listitem> <para> Configure and deploy Plain Old Java Objects (POJOs) into a Java Standard Edition (SE) runtime environment. </para> </listitem> <listitem> <para> Manage the lifecycle of applications used with the Media Server. </para> </listitem> <listitem> <para> Convert internal, fixed subsystems into stand-alone POJOs. </para> </listitem> <listitem> <para> Introduce the Service Provider Interface (SPI) throughout the base server. </para> </listitem> </itemizedlist> </para> </formalpara> <mediaobject id="ittms-mms-MMSArchitecture-dia-MSGeneral"> <imageobject> <imagedata align="center" fileref="images/mms-MMSArchitecture-dia-MSGeneral.jpg" format="JPG" width="405"/> </imageobject> </mediaobject> <para> The Media Server's high degree of modularity benefits the application developer in several ways. 
The already-tight code can be further optimized to support applications that require small footprints. For example, if <acronym>PSTN</acronym> interconnection is unnecessary in an application, then the D-channel feature can be removed from the Media Server. In the future, if the same application is deployed within a Signaling System 7 (<acronym>SS7</acronym>) network, then the appropriate endpoint can be enabled, and the application is then compatible. </para> <mediaobject id="ittms-mms-MMSArchictecture-dia-MMS"> <imageobject> <imagedata align="center" fileref="images/mms-MMSArchictecture-dia-MMS.png" format="PNG" scalefit="1" width="550"/> </imageobject> </mediaobject> <para> The Media Server architecture assumes that call control intelligence lies outside of the Media Server, and is handled by an external entity. The Media Server also assumes that call controllers will use control procedures such as <acronym>MGCP</acronym>, Megaco or <acronym>MSML</acronym>, among others. Each specific control module can be plugged in directly to the server as a standard deployable unit. Utilizing the JBoss Microcontainer for the implementation of control protocol-specific communication logic allows for simple deployment. It is therefore unnecessary for developers to configure low-level transaction and state management details, multi-threading, connection pooling, or other such details and <acronym>API</acronym>s. </para> <note> <para> The Media Server uses <acronym>SLEE</acronym> for implementing its own communication capabilities. The <acronym>SLEE</acronym> container does not serve here as a call controller. </para> </note> <para> In addition to control protocol modules, the <acronym>SLEE</acronym> container is aimed at providing high-level features like Interactive Voice Response (<acronym>IVR</acronym>), the Drools business rule management system, and VoiceXML engines. 
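As a concrete illustration of the control procedures mentioned above, the following sketch assembles a minimal <acronym>MGCP</acronym> CreateConnection (<literal>CRCX</literal>) command of the kind an external call agent sends to a media gateway, following the message layout defined in RFC 3435. This is an illustrative fragment only: the transaction ID, endpoint name, and call ID are invented for the example, and the class is not part of the Media Server API. <programlisting language="Java">public class CrcxExample {

    /**
     * Builds a minimal MGCP CreateConnection (CRCX) command.
     * Only the mandatory CallId (C:) and Mode (M:) parameters are shown.
     */
    public static String buildCrcx(int transactionId, String endpoint,
                                   String callId, String mode) {
        StringBuilder sb = new StringBuilder();
        // Command line: verb, transaction id, endpoint name, protocol version
        sb.append("CRCX ").append(transactionId).append(' ')
          .append(endpoint).append(" MGCP 1.0\n");
        sb.append("C: ").append(callId).append('\n'); // call identifier
        sb.append("M: ").append(mode).append('\n');   // connection mode
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical IVR endpoint name on a media gateway
        System.out.print(buildCrcx(1234, "ivr/1@ms.example.net",
                                   "A3C47F2146789F0", "sendrecv"));
    }
}</programlisting> A call agent would transmit such a command over UDP and correlate the gateway's response by the transaction ID on the first line.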
</para> <para> The modules deployed under <acronym>SLEE</acronym> control interact with the Media Server's Service Provider Interface (<acronym>SPI</acronym>) through the Media Server Control Resource Adapter, or <acronym>MSC-RA</acronym>. The <acronym>MSC-RA</acronym> follows the recommendations of <ulink url="http://jcp.org/en/jsr/detail?id=309">JSR-309</ulink> and implements asynchronous interconnection with the Media Server <acronym>SPI</acronym> stack. This local implementation is restricted and does not use high-level abstractions (for example, VoiceXML dialogs). </para> <formalpara> <title>Media Flow Path</title> <para> Service Objects are used to represent the media flow path for media objects used with the Media Server. By implementing Service Objects to manage components, constructing media services can be separated into two areas: <itemizedlist> <listitem> <para> Implementing components that generate, or consume, media data. </para> </listitem> <listitem> <para> Assembling media component chains to build a media flow path. </para> </listitem> </itemizedlist> </para> </formalpara> <para> Media Components consist of a number of sub-components. </para> <section id="ittms-Typical_Deployment_Scenario"> <title>Typical Deployment Scenario</title> <para> The Media Server offers a complete media gateway and server solution; here is a non-exhaustive list of the MMS's capabilities: </para> <itemizedlist> <listitem> <para> Digital Signal Processing to convert and compress <acronym>TDM</acronym> voice circuits into IP packets. </para> </listitem> <listitem> <para> Announcement access points. </para> </listitem> <listitem> <para> Conferencing. </para> </listitem> <listitem> <para> High-level Interactive Voice Response (<acronym>IVR</acronym>) engines. </para> </listitem> </itemizedlist> <para> The gateway is able to provide signaling conversion and can operate as a Session Border Controller at the boundaries of Local Access Networks (<acronym>LAN</acronym>s). The Media Server is always controlled by an external <application condition="mob">Mobicents Platform</application> application server, which implements the call control logic. </para> <mediaobject id="ittms-mms-MMSArchitecture-MSDeployment"> <imageobject> <imagedata align="left" fileref="images/mms-MMSArchitecture-MSDeployment.gif" format="GIF" scalefit="1" width="535"/> </imageobject> <caption> <para> Typical Media Server Deployment Scenario </para> </caption> </mediaobject> </section> </section> <section id="ittms-Endpoints"> <title>Endpoints</title> <formalpara> <title>Endpoints</title> <para> It is convenient to consider a media gateway as a collection of endpoints. An endpoint is a logical representation of a physical entity such as an analog phone or a channel in a trunk. Endpoints are sources or sinks of data and can be either physical or virtual. Physical endpoint creation requires hardware installation, while software is sufficient for creating virtual endpoints. 
An interface on a gateway that terminates at a trunk connected to a <acronym>PSTN</acronym> switch would be an example of a physical endpoint. An audio source in an audio content server would be an example of a virtual endpoint. </para> </formalpara> <para> The type of the endpoint determines its functionality. Our analysis, so far, has led us to isolate the following basic endpoint types: </para> <itemizedlist> <listitem> <para> digital signal 0 (<acronym>DS0</acronym>) </para> </listitem> <listitem> <para> analog line </para> </listitem> <listitem> <para> announcement server access point </para> </listitem> <listitem> <para> conference bridge access point </para> </listitem> <listitem> <para> packet relay </para> </listitem> <listitem> <para> Asynchronous Transfer Mode (<acronym>ATM</acronym>) "trunk side" interface </para> </listitem> </itemizedlist> <para> This list is not final: other endpoint types may be defined in the future, such as test endpoints which could be used to check network quality, or frame-relay endpoints that could be used to manage audio channels multiplexed over a frame-relay virtual circuit. </para> <variablelist> <title>Descriptions of Various Access Point Types</title> <varlistentry> <term>Announcement Server Access Point</term> <listitem> <para> An announcement server endpoint provides access, intuitively, to an announcement server. Upon receiving requests from the call agent, the announcement server <quote>plays</quote> a specified announcement. A given announcement endpoint is not expected to support more than one connection at a time. Connections to an announcement server are typically one-way; they are <quote>half-duplex</quote>: the announcement server is not expected to listen to audio signals from the connection. Announcement access points are capable of playing announcements; however, these endpoints do not have the capability of transcoding. To achieve transcoding, a Packet Relay must be used. 
Also note that the announcement server endpoint can generate tones, such as dual-tone multi-frequency (DTMF). </para> </listitem> </varlistentry> <varlistentry> <term>Interactive Voice Response Access Point</term> <listitem> <para> An Interactive Voice Response (<acronym>IVR</acronym>) endpoint provides access to an <acronym>IVR</acronym> service. Upon requests from the call agent, the <acronym>IVR</acronym> server <quote>plays</quote> announcements and tones, and <quote>listens</quote> for responses, such as (<acronym>DTMF</acronym>) input or voice messages, from the user. A given <acronym>IVR</acronym> endpoint is not expected to support more than one connection at a time. Similarly to announcement endpoints, IVR endpoints do not possess media-transcoding capabilities. IVR plays and records in the format in which the media was stored or received. </para> </listitem> </varlistentry> <varlistentry> <term>Conference Bridge Access Point</term> <listitem> <para> A conference bridge endpoint is used to provide access to a specific conference. Media gateways should be able to establish several connections between the endpoint and packet networks, or between the endpoint and other endpoints in the same gateway. The signals originating from these connections are mixed according to the connection <quote>mode</quote> (as specified later in this document). The precise number of connections that an endpoint supports is characteristic of the gateway, and may, in fact, vary according to the allocation of resources within the gateway. </para> </listitem> </varlistentry> <varlistentry> <term>Packet Relay Endpoint</term> <listitem> <para> A packet relay endpoint is a specific form of conference bridge that typically only supports two connections. 
Packet relays can be found in firewalls between a protected and an open network, or in transcoding servers used to provide interoperation between incompatible gateways, such as gateways which don't support compatible compression algorithms, or gateways which operate over different transmission networks, such as IP or ATM. </para> </listitem> </varlistentry> <varlistentry> <term>Echo Endpoint</term> <listitem> <para> An echo (or loopback) endpoint is a test endpoint that is used for maintenance and/or continuity testing. The endpoint returns the incoming audio signal from the endpoint back to that same endpoint, thus creating an echo effect. </para> </listitem> </varlistentry> </variablelist> <formalpara> <title>Signal Generators (<acronym>SG</acronym>s) and Signal Detectors (<acronym>SD</acronym>s)</title> <para> This endpoint contains a set of resources which provide media-processing functionality. It manages the interconnection of media streams between the resources, and arbitrates the flow of media stream data between them. Media services, also called <emphasis>commands</emphasis>, are invoked by a client application on the endpoint; that endpoint causes the resources to perform the desired services, and directs events sent by the resources to the appropriate client. A primary resource and zero or more secondary resources are included in the endpoint. The primary resource is typically connected to an external media stream, and provides the data from that stream to secondary resources. The secondary resources may process that stream (for example, recording it and/or performing automatic speech recognition on it), or may themselves generate media stream data (for example, playing a voice file) which is then transmitted to the primary resource. 
</para> </formalpara> <mediaobject id="ittms-mms-MMSArchitecture-dia-Endpoint"> <imageobject> <imagedata align="left" fileref="images/mms-MMSArchitecture-dia-Endpoint.gif" format="GIF" width="370"/> </imageobject> </mediaobject> <para> A resource is statically prepared if the preparation takes place at the time of creation. A resource is dynamically prepared if preparation of a particular resource (and its associated media streams) does not occur until it is required by a media operation. Static preparation can lead to less efficient usage of the Media Server's resources, because those resources tend to be allocated for a longer time before use. However, once a resource has been prepared, it is guaranteed to be available for use. Dynamic preparation may utilize resources more efficiently because just-in-time (<acronym>JIT</acronym>) allocation algorithms may be used. </para> <para> An endpoint is divided logically into a Service Provider Interface that is used to implement a specific endpoint, and a management interface, which is used to implement the manageable resources of that endpoint. All endpoints are plugged into the Mobicents <acronym>SLEE</acronym> server by registering each endpoint with the appropriate JBoss Microcontainer. All major endpoints are manageable JBoss Microcontainers which are interconnected through the server. The most effective way to add endpoints to a Media Server is to create the endpoint application within a JBoss Microcontainer. </para> <para> The <acronym>SPI</acronym> layer is an abstraction that endpoint providers must implement in order to enable their media-processing features. An implementation of an <acronym>SPI</acronym> for an endpoint is referred to as an <emphasis>Endpoint Provider</emphasis>. 
</para> <!-- <mediaobject id="ittms-mms-MMSArchictecture-dia-Endpoint"> <imageobject> <imagedata align="center" width="700" fileref="images/mms-MMSArchictecture-dia-Endpoint.png" format="PNG" /> </imageobject> <caption> <para>EndpointManagementMBean UML Diagram</para> </caption> </mediaobject> --> </section> <section id="ittms-Endpoint_Identifiers"> <title>Endpoint Identifiers</title> <para> An endpoint is identified by its local name. The syntax of the local name depends on the type of endpoint being named. However, the local name for each of these types is naturally hierarchical, beginning with a term that identifies the physical gateway containing the given endpoint, and ending with a term which specifies the individual endpoint concerned. With this in mind, the JNDI naming rules are applied to the endpoint identifiers. </para> </section> <section id="ittms-Controller-Modules"> <title>Controller Modules</title> <para> Controller Modules allow external interfaces to be implemented for the Media Server. Each controller module implements an industry-standard control protocol, and uses a generic SPI to control processing components or endpoints. </para> <para> One such controller module implements the Media Gateway Control Protocol (MGCP). MGCP is designed as an internal protocol within a distributed system that appears to external networks as a single VoIP gateway. The MGCP system is composed of a Call Agent, a set of gateways including at least one "media gateway", and a "signalling gateway" (when connecting to an SS7-controlled network). The Call Agent can be distributed over several computer platforms. Each gateway handles the conversion of media signals between circuits and packets. </para> </section> <section id="ittms-Connections"> <title>Connections</title> <para> Connections are created by the call agent on each endpoint that will be involved in the <quote>call</quote>. 
In the classic example of a connection between two <quote>DS0</quote> endpoints, EP1 and EP2, the call agents controlling the endpoints establish two connections (C1 and C2): </para> <mediaobject id="ittms-mms-MMSArchitecture-dia-MsConnection"> <imageobject> <imagedata align="center" fileref="images/mms-MMSArchitecture-dia-MsConnection.png" format="PNG" scalefit="1" width="440"/> </imageobject> <caption> <para> Media Server Connections </para> </caption> </mediaobject> <para> Each connection is designated locally by a connection identifier, and will be characterized by connection attributes. </para> <formalpara> <title>Resources and Connection Attributes</title> <para> Many types of resources can be associated with a connection, such as specific signal-processing functions or packetization functions. Generally, these resources fall into two categories: </para> </formalpara> <variablelist id="ittms-Two_Types_of_Resources"> <title>Two Types of Resources</title> <varlistentry> <term>Externally-Visible Resources</term> <listitem> <para> Externally-visible resources are ones which affect the format of <quote>the bits on the network</quote>, and must be communicated to the second endpoint involved in the connection. </para> </listitem> </varlistentry> <varlistentry> <term>Internal Resources</term> <listitem> <para> Internal resources are resources which determine which signal is being sent over the connection and how the received signals are processed by the endpoint. </para> </listitem> </varlistentry> </variablelist> <para> The resources allocated to a connection or, more generally, to the handling of the connection, are chosen by the Media Server under instructions from the call agent. The call agent provides these instructions by sending two sets of parameters to the Media Server: </para> <itemizedlist id="ittms-Two_Sets_of_Parameters"> <listitem> <para> The local directives instruct the gateway on the choice of resources that should be used for a connection. 
</para> </listitem> <listitem> <para> When available, the <emphasis>session description</emphasis> is provided by the other end of the connection. </para> </listitem> </itemizedlist> <para> The local directives specify parameters such as the mode of the connection (e.g. send-only, or send-receive), preferred coding or packetization methods, the usage of echo-cancellation or silence suppression, etc. (A more comprehensive and detailed list can be found in the specification of the <literal>LocalConnectionOptions</literal> parameter of the <literal>CreateConnection</literal> command.) For each of these parameters, the call agent can either specify a value, a range of values, or no value at all. This allows implementations to support various levels of control, from very tight control, where the call agent specifies minute details of the connection handling, to very loose control, where the call agent only specifies broad guidelines, such as the maximum bandwidth, and lets the gateway select the detailed values itself. </para> <para> Based on the value of the local directives, the gateway determines the resources allocated to the connection. When this is possible, the gateway will choose values that are in line with the remote session description; however, there is no absolute requirement that the parameters will be exactly the same. </para> <para> Once the resources have been allocated, the gateway will compose a <emphasis>session description</emphasis> that describes the way it intends to receive packets. Note that the session description may in some cases present a range of values. For example, if the gateway is ready to accept one of several compression algorithms, it can provide a list of these accepted algorithms. </para> <formalpara id="ittms-Local_Connections_Are_a_Special_Case"> <title>Local Connections Are a Special Case</title> <para> Large gateways include a large number of endpoints which are often of different types. 
In some networks, we may often have to set up connections between endpoints located within the same gateway. Examples of such connections may be: </para> </formalpara> <itemizedlist id="ittms-Examples_of_Local_Connections"> <listitem> <para> connecting a trunk line to a wiretap device; </para> </listitem> <listitem> <para> connecting a call to an Interactive Voice-Response (<acronym>IVR</acronym>) unit; </para> </listitem> <listitem> <para> connecting a call to a conferencing unit; or, </para> </listitem> <listitem> <para> routing a call from one endpoint to another, something often described as a <emphasis>hairpin</emphasis> connection. </para> </listitem> </itemizedlist> <para> Local connections are much simpler to establish than network connections. In most cases, the connection will be established through a local interconnecting device, such as, for example, a TDM bus. </para> </section> <section id="ittms-Events_and_Signals"> <title>Events and Signals</title> <para> The concept of events and signals is central to the Media Server. A Call Controller may ask to be notified about certain events occurring in an endpoint (for example: off-hook events) by passing an event identifier as a parameter to an endpoint's <function>subscribe()</function> method. </para> <para> A Call Controller may also request certain signals to be applied to an endpoint (for example: a dial-tone) by supplying the identifier of the event as an argument to the endpoint's <function>apply()</function> method. </para> <para> Events and signals are grouped in packages, within which they share the same namespace; each event or signal is named by what we will refer to as an event identifier. Event identifiers are integer constants. Some events may be parameterized with additional data, such as a DTMF mask. 
</para> <para> Signals are divided into different types depending on their behavior: </para> <variablelist id="ittms-Types_of_Signals"> <title>Types of Signals</title> <varlistentry> <term>On/off (OO)</term> <listitem> <para> Once applied, these signals last until they are turned off. This can only happen as the result of a reboot/restart or a new signal request where the signal is explicitly turned off. Signals of type OO are defined to be idempotent; thus, multiple requests to turn a given OO signal on (or off) are perfectly valid. An On/Off signal could be a visual message-waiting indicator (<acronym>VMWI</acronym>). Once turned on, it <emphasis>must not</emphasis> be turned off until explicitly instructed to by the Call Agent, or as a result of an endpoint restart. In other words, these signals will not turn off as a result of the detection of a requested event. </para> </listitem> </varlistentry> <varlistentry> <term>Time-out (TO)</term> <listitem> <para> Once applied, these signals last until they are either cancelled (by the occurrence of an event or by explicit release of the signal generator), or a signal-specific period of time has elapsed. A TO signal that times out will generate an <emphasis>operation complete</emphasis> event. If an event occurs before the signal's allotted time-out period (for example, 180 seconds) has elapsed, the signal will, by default, be stopped (the <emphasis>keep signals active</emphasis> action can be used to override this behavior). If the signal is not stopped, the signal will time out, stop, and generate an <emphasis>operation complete</emphasis> event, about which the server controller may or may not have requested to be notified. A TO signal that fails after being started, but before having generated an <emphasis>operation complete</emphasis> event, will generate an <emphasis>operation failure</emphasis> event that includes the name of the signal that failed. Deletion of a connection with an active TO signal will result in such a failure. 
</para> </listitem> </varlistentry> <varlistentry> <term>Brief (BR)</term> <listitem> <para> The duration of these signals is normally so short that they stop on their own. If a signal stopping event occurs, or a new signal request is applied, a currently active BR signal will not stop. However, any pending BR signals not yet applied will be cancelled (a BR signal becomes pending if a signal request includes a BR signal, and there is already an active BR signal). As an example, a brief tone could be a DTMF digit. If the DTMF digit <literal>1</literal> is currently being played, and a signal stopping event occurs, the <literal>1</literal> would play to completion. If a request to play DTMF digit <literal>2</literal> arrives before DTMF digit <literal>1</literal> finishes playing, DTMF digit <literal>2</literal> would become pending. </para> </listitem> </varlistentry> </variablelist> <para> Signal(s) may be generated on endpoints or on connections. One or more actions such as the following are associated with each event: </para> <itemizedlist id="ittms-Types_of_Actions_Which_Can_Be_Associated_with_Events"> <listitem> <para> notify the event immediately, together with the accumulated list of observed events; </para> </listitem> <listitem> <para> accumulate the event in an event buffer, but do not yet notify; </para> </listitem> <listitem> <para> keep signal(s) active; or </para> </listitem> <listitem> <para> ignore the event. </para> </listitem> </itemizedlist> </section> </section> </chapter> <chapter id="chapter-Installing_the_Media_Server" lang="en-US"> <!-- chapter id nickname: itms --><title>Installing the Media Server</title> <para> The Media Server is a self-contained Java software stack consisting of multiple servers architecturally designed to work together. This server stack includes the JBoss Application Server and the JAIN SLEE Server; both of these required servers are included in the Media Server distribution. 
</para> <para> The Media Server is available in both binary and source code distributions. The simplest way to get started with the Media Server is to download the ready-to-run binary distribution. Alternatively, the source code for the Media Server can be obtained by checking it out from its repository using the Subversion version control system (<acronym>VCS</acronym>), and then built using the Maven build system. Installing the binary distribution is recommended for most users; obtaining and building the source code is recommended for those who want access to the latest revisions and Media Server capabilities. </para> <bridgehead id="itms-Installing_the_Java_Development_Kit">Installing the Java Development Kit</bridgehead><section lang="en-US"> <!-- chapter id nickname: jdkicar --><title>Java Development Kit: Installing, Configuring and Running</title> <para> The <application condition="mob">Mobicents</application> platform is written in Java. A working Java Runtime Environment (<acronym>JRE</acronym>) or Java Development Kit (<acronym>JDK</acronym>) must be installed prior to running the server. The required version is 5 or higher. </para> <para> It is possible to run most <application condition="mob">Mobicents</application> servers, such as the JAIN SLEE Server, using a Java 6 JRE or JDK. However, the XML Document Management Server does not run on Java 6. Check the <ulink url="http://groups.google.com/group/mobicents-public/topics">public support forum</ulink> and <ulink url="http://www.mobicents.org/roadmap.html">project roadmap</ulink> pages to verify Java 6 support prior to running the XML Document Management Server with Java 6. 
</para> <formalpara> <title>JRE or JDK?</title> <para> Although <application condition="mob">Mobicents</application> servers are capable of running on the Java Runtime Environment, this guide assumes the audience is mainly developers interested in developing Java-based, <application condition="mob">Mobicents</application>-driven solutions. Therefore, installing the Java Development Kit is covered due to the anticipated audience requirements. </para> </formalpara> <formalpara> <title>32-Bit or 64-Bit JDK</title> <para> If the system uses a 64-bit Linux or Windows architecture, the 64-bit JDK is strongly recommended over the 32-bit version. The following heuristics should be considered in determining whether the 64-bit Java Virtual Machine (JVM) is suitable: </para> </formalpara> <itemizedlist> <listitem> <para> Wider datapath: the pipe between RAM and CPU is doubled, which improves the performance of memory-bound applications when using a 64-bit JVM. </para> </listitem> <listitem> <para> 64-bit memory addressing provides a virtually unlimited (1 exabyte) heap allocation. Note that large heaps can affect garbage collection. </para> </listitem> <listitem> <para> Applications that run with more than 1.5 GB of RAM (including free space for garbage collection optimization) should utilize the 64-bit JVM. </para> </listitem> <listitem> <para> Applications that run on a 32-bit JVM and do not require more than minimal heap sizes will gain nothing from a 64-bit JVM. Excluding memory issues, 64-bit hardware with the same relative clock speed and architecture is not likely to run Java applications faster than the 32-bit version. </para> </listitem> </itemizedlist> <note> <para> The following instructions describe how to download and install the 32-bit JDK; however, the steps are nearly identical for installing the 64-bit version. 
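As a convenience, the data model of an already-installed JVM can be checked from its version banner. The following sketch is not part of the product; it only assumes that a <command>java</command> executable may be on the <envar>PATH</envar> (Sun JVMs print <literal>64-Bit</literal> in the banner of 64-bit builds):

```shell
#!/bin/sh
# Report whether the JVM found on the PATH is a 64-bit build.
# "java -version" writes its banner to stderr, so redirect it first.
if java -version 2>&1 | grep -q "64-Bit"; then
    echo "64-bit JVM detected"
else
    echo "32-bit (or unknown) JVM detected"
fi
```

If no <command>java</command> executable is installed yet, the script simply falls through to the second branch.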
</para> </note> <!-- <formalpara id="jdkicar-binary-Java_Development_Kit-Pre-Install Requirements and Prerequisites"> <title>Pre-Install Requirements and Prerequisites</title> <para></para> </formalpara> --><!-- <variablelist condition="jdkicar-binary-Java_Development_Kit-Hardware_Requirements"> <title>Hardware Requirements</title> <varlistentry> <term>Sufficient Disk Space</term> <listitem> <para></para> </listitem> </varlistentry> </variablelist> --><!-- <variablelist condition="jdkicar-binary-Java_Development_Kit-Software_Prerequisites"> <title>Software Prerequisites</title> <varlistentry> <term></term> <listitem> <para></para> </listitem> </varlistentry> </variablelist> --> <formalpara> <!-- id="jdkicar-binary-Java_Development_Kit-Downloading"> --><title>Downloading</title> <para> Download the Sun JDK 5.0 (Java 2 Development Kit) from Sun's website: <ulink url="http://java.sun.com/javase/downloads/index_jdk5.jsp"/>. Click the <guilabel>Download</guilabel> link next to "JDK 5.0 Update <replaceable><x></replaceable>" (where <replaceable><x></replaceable> is the latest minor version release number). </para> </formalpara> <para> The Sun website offers two download options: <itemizedlist> <listitem> <para> A self-extracting RPM (for example, <filename>jdk-1_5_0_16-linux-i586-rpm.bin</filename>) </para> </listitem> <listitem> <para> A self-extracting file (e.g. <filename>jdk-1_5_0_16-linux-i586.bin</filename>) </para> </listitem> </itemizedlist> </para> <para> If installing the JDK on Red Hat Enterprise Linux, Fedora, or another RPM-based Linux system, it is recommended that the self-extracting file containing the RPM package is selected. This option will set up and use the SysV service scripts in addition to installing the JDK. The RPM option is also recommended if the <application condition="mob">Mobicents</application> platform is being set up in a production environment. 
</para> <formalpara> <!-- id="jdkicar-binary-Java_Development_Kit-Installing"> --><title>Installing</title> <para> The following procedures detail how to install the Java Development Kit on both Linux and Windows. </para> </formalpara> <procedure> <title>Installing the JDK on Linux</title> <step> <para> Ensure the file is executable, then run it: </para> <!-- ~]$ chmod +x "jdk-1_5_0_<minor_version>-linux-<architecture>-rpm.bin" ~]$ ./"jdk-1_5_0_<minor_version>-linux-<architecture>-rpm.bin" --> <screen>~]$ chmod +x "jdk-1_5_0_<minor_version>-linux-<architecture>-rpm.bin" ~]$ ./"jdk-1_5_0_<minor_version>-linux-<architecture>-rpm.bin" </screen> </step> </procedure> <note> <title>Setting up SysV Service Scripts for Non-RPM Files</title> <para> If the non-RPM self-extracting file is selected for an RPM-based system, the SysV service scripts can be configured by downloading and installing one of the <literal>-compat</literal> packages from the JPackage project. Download the <literal>-compat</literal> package that corresponds correctly to the minor release number of the installed JDK. The compat packages are available from <ulink url="ftp://jpackage.hmdc.harvard.edu/JPackage/1.7/generic/RPMS.non-free/"/>. </para> </note> <important> <para> A <literal>-compat</literal> package is not required for RPM installations. The <literal>-compat</literal> package performs the same SysV service script set up that the RPM version of the JDK installer does. </para> </important> <procedure> <title>Installing the JDK on Windows</title> <step> <para> Using Explorer, double-click the downloaded self-extracting installer and follow the instructions to install the JDK. 
</para> </step> </procedure> <formalpara> <!-- id="jdkicar-binary-Java_Development_Kit-Configuring"> --><title>Configuring</title> <para> Configuring the system for the JDK consists of two tasks: setting the <envar>JAVA_HOME</envar> environment variable, and ensuring the system is using the proper JDK (or JRE) using the <command>alternatives</command> command. Setting <envar>JAVA_HOME</envar> generally overrides the values for <command>java</command>, <command>javac</command> and <command>java_sdk_1.5.0</command> in <command>alternatives</command>; however, it is recommended to specify the value for consistency. </para> </formalpara> <variablelist> <varlistentry> <term>Setting the <envar>JAVA_HOME</envar> Environment Variable on Generic Linux</term> <listitem> <para> After installing the JDK, ensure the <envar>JAVA_HOME</envar> environment variable exists and points to the location of the JDK installation. </para> <formalpara> <title>Setting the <envar>JAVA_HOME</envar> Environment Variable on Linux</title> <para> Determine whether <envar>JAVA_HOME</envar> is set by executing the following command: </para> </formalpara> <screen>~]$ echo $JAVA_HOME </screen> <para> If <envar>JAVA_HOME</envar> is not set, the value must be set to the location of the JDK installation on the system. This can be achieved by adding two lines to the <filename>~/.bashrc</filename> configuration file: one exporting <envar>JAVA_HOME</envar>, and one prepending <filename>$JAVA_HOME/bin</filename> to the <envar>PATH</envar>. Open <filename>~/.bashrc</filename> (or create it if it does not exist) and add lines similar to the following anywhere inside the file: </para> <programlisting>export JAVA_HOME="/usr/lib/jvm/jdk1.5.0_<version>"
export PATH="$JAVA_HOME/bin:$PATH" </programlisting> <para> The changes should also be applied for other users who will be running <application condition="mob">Mobicents</application> servers on the machine (any environment variables <command>export</command>ed from <filename>~/.bashrc</filename> files are local to that user). 
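The edit described above can be scripted and verified in one short session. This is only a sketch: the file name <filename>bashrc.example</filename> and the install path <filename>/usr/lib/jvm/jdk1.5.0_16</filename> are illustrative stand-ins; in practice you would edit <filename>~/.bashrc</filename> and use your own JDK directory:

```shell
#!/bin/sh
# Sketch: append the two JAVA_HOME lines to a bashrc-style file and verify.
# BASHRC and JDK_DIR are illustrative values; substitute real paths.
BASHRC="./bashrc.example"          # in practice: ~/.bashrc
JDK_DIR="/usr/lib/jvm/jdk1.5.0_16" # in practice: your JDK install dir

printf 'export JAVA_HOME="%s"\n' "$JDK_DIR" >> "$BASHRC"
printf 'export PATH="$JAVA_HOME/bin:$PATH"\n' >> "$BASHRC"

# Source the file in a subshell and confirm JAVA_HOME became visible.
( . "$BASHRC"; echo "JAVA_HOME is now: $JAVA_HOME" )
```

Sourcing in a subshell keeps the test from polluting the current environment; a real login shell picks the values up automatically on the next start.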
</para> </listitem> </varlistentry> <varlistentry> <term>Setting <envar>java</envar>, <envar>javac</envar> and <envar>java_sdk_1.5.0</envar> using the <command>alternatives</command> command </term> <listitem> <formalpara> <title>Selecting the Correct System JVM on Linux using <command>alternatives</command></title> <para> On systems with the <command>alternatives</command> command, including Red Hat Enterprise Linux and Fedora, it is possible to choose which JDK (or JRE) installation to use, as well as which <command>java</command> and <command>javac</command> executables should be run when called. </para> </formalpara> <para> <emphasis>As the superuser</emphasis>, call <command>/usr/sbin/alternatives</command> with the <option>--config java</option> option to select between JDKs and JREs installed on your system: </para> <programlisting> home]$ sudo /usr/sbin/alternatives --config java There are 3 programs which provide 'java'. Selection Command ----------------------------------------------- 1 /usr/lib/jvm/jre-1.5.0-gcj/bin/java 2 /usr/lib/jvm/jre-1.6.0-sun/bin/java *+ 3 /usr/lib/jvm/jre-1.5.0-sun/bin/java Enter to keep the current selection[+], or type selection number: </programlisting> <para> The Sun JDK, version 5, is required to run the <command>java</command> executable. In the <command>alternatives</command> information printout above, a plus (<literal>+</literal>) next to a number indicates the option currently being used. Press <keycap>Enter</keycap> to keep the current JVM, or enter the number corresponding to the JVM to select that option. 
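Because <command>alternatives</command> manages <command>java</command> through a chain of symbolic links, the current selection can also be verified non-interactively by resolving that chain. The sketch below assumes a Linux system with GNU coreutils (<command>readlink -f</command>); it is a convenience check, not part of the documented procedure:

```shell
#!/bin/sh
# Resolve the symlink chain behind the "java" command to see which
# JVM installation the alternatives system currently points at.
JAVA_BIN="$(command -v java || true)"
if [ -n "$JAVA_BIN" ]; then
    # readlink -f (GNU coreutils) follows every link to the real binary.
    echo "java resolves to: $(readlink -f "$JAVA_BIN")"
else
    echo "no java executable found on PATH"
fi
```

On an alternatives-managed system the resolved path typically ends inside a directory such as <filename>/usr/lib/jvm/</filename>, which identifies the selected JDK or JRE at a glance.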
</para> <para> As the superuser, repeat the procedure above for the <command>javac</command> command and the <literal>java_sdk_1.5.0</literal> environment variable: </para> <screen>home]$ sudo /usr/sbin/alternatives --config javac </screen> <screen>home]$ sudo /usr/sbin/alternatives --config java_sdk_1.5.0 </screen> </listitem> </varlistentry> <varlistentry> <term>Setting the <envar>JAVA_HOME</envar> Environment Variable on Windows</term> <listitem> <para> For information on how to set environment variables in Windows, refer to <ulink url="http://support.microsoft.com/kb/931715"/>. </para> </listitem> </varlistentry> </variablelist> <formalpara> <!-- id="jdkicar-binary-Java_Development_Kit-Testing"> --><title>Testing</title> <para> To ensure the correct JDK or Java version (5 or higher), and that the java executable is in the <envar>PATH</envar> environment variable, run the <command>java -version</command> command in the terminal from the home directory: </para> </formalpara> <programlisting> home]$ java -version java version "1.5.0_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_16-b03) Java HotSpot(TM) Client VM (build 1.5.0_16-b03, mixed mode, sharing) </programlisting> <!-- <formalpara id="jdkicar-binary-Java_Development_Kit-Running"> <title>Running</title> <para></para> </formalpara> --><!-- <formalpara id="jdkicar-binary-Java_Development_Kit-Stopping"> <title>Stopping</title> <para></para> </formalpara> --> <formalpara> <!-- id="jdkicar-binary-Java_Development_Kit-Uninstalling"> --><title>Uninstalling</title> <para> It is not necessary to remove a particular JDK from a system, because the JDK and JRE version can be switched as required using the <command>alternatives</command> command, and/or by setting <envar>JAVA_HOME</envar>. </para> </formalpara> <formalpara> <title>Uninstalling the JDK on Linux</title> <para> On RPM-based systems, uninstall the JDK using the <command>yum remove <jdk_rpm_name></command> command. 
</para> </formalpara> <formalpara> <title>Uninstalling the JDK on Windows</title> <para> On Windows systems, check the JDK entry in the <literal>Start</literal> menu for an uninstall option, or use <literal>Add/Remove Programs</literal>. </para> </formalpara> </section> <section id="itms-binary-Media_Server-Installing_Configuring_and_Running"> <title>Binary Distribution: Installing, Configuring and Running</title> <para> The Media Server distribution comes bundled with the JBoss Application Server, the latest version of the JAIN SLEE Server, and all of the resource adapters required to run the various bundled examples. </para> <section id="itms-binary-Media_Server-PreInstall_Requirements_and_Prerequisites"> <title>Pre-Install Requirements and Prerequisites</title> <para> Ensure that the following requirements have been met before continuing with the install. </para> <variablelist id="itms-binary-Media_Server-Hardware_Requirements"> <title>Hardware Requirements</title> <varlistentry> <term>Sufficient Disk Space</term> <listitem> <para> Once unzipped, the Media Server binary release requires <emphasis>at least</emphasis> 110 MB of free disk space. Keep in mind that disk space requirements may change from release to release. </para> </listitem> </varlistentry> <varlistentry> <term>Anything Java Itself Will Run On</term> <listitem> <para> The Media Server and its bundled servers, JBoss and JAIN SLEE, are 100% Java. The Media Server will run on the same hardware that the JBoss Application Server runs on. </para> </listitem> </varlistentry> </variablelist> <variablelist id="itms-binary-Media_Server-Software_Prerequisites"> <title>Software Prerequisites</title> <varlistentry> <term>JDK 5 or Higher</term> <listitem> <para> A working installation of the Java Development Kit (<acronym>JDK</acronym>) version 5 or higher is required in order to run the Media Server. 
Note that the JBoss Application Server is a runtime dependency of the Media Server and, as mentioned, comes bundled with the binary distribution. </para> </listitem> </varlistentry> </variablelist> </section> <section id="itms-binary-Media_Server-Downloading"> <title>Downloading</title> <para> The latest version of the Media Server is available from <ulink url="http://www.mobicents.org/mms-downloads.html"/>. The top row of the table holds the latest version. Click the <literal>Download</literal> link to start the download. </para> </section> <section id="itms-binary-Media_Server-Installing"> <title>Installing</title> <para> Once the requirements and prerequisites have been met and you have downloaded the binary distribution zip file, you are ready to install the Media Server. Follow the instructions below for your platform, whether Linux or Windows. </para> <note id="itms-Media_Server-Use_Version_Numbers_Relevant_to_Your_Installation"> <title>Version Number</title> <para> For clarity, the command line instructions presented in this chapter use specific version numbers and directory names. Ensure this information is substituted with the binary distribution's version numbers and file names. </para> </note> <procedure> <title>Installing the Media Server Binary Distribution on Linux</title> <para> It is assumed that the downloaded archive is saved in the home directory, and that a terminal window is open displaying the home directory. </para> <step> <para> Create a subdirectory to extract the files into. For ease of identification, it is recommended that the version number of the binary is included in this directory name. 
</para> <screen>~]$ mkdir <quote>ms-<replaceable><version></replaceable></quote> </screen> </step> <step> <para> Move the downloaded zip file into the directory: </para> <screen>~]$ mv <quote>mobicents-media-server-all-1.1.0.GA.zip</quote> <quote>ms-<replaceable><version></replaceable></quote> </screen> </step> <step> <para> Move into the directory: </para> <screen>~]$ cd <quote>ms-<replaceable><version></replaceable></quote> </screen> </step> <step> <para> Extract the files into the current directory by executing one of the following commands. <itemizedlist> <listitem> <para> Java: <screen>ms-<replaceable><version></replaceable>]$ jar -xvf <quote>mobicents-media-server-all-1.1.0.GA.zip</quote> </screen> </para> </listitem> <listitem> <para> Linux: <screen>ms-<replaceable><version></replaceable>]$ unzip <quote>mobicents-media-server-all-1.1.0.GA.zip</quote> </screen> </para> </listitem> </itemizedlist> <note> <para> Alternatively, use <command>unzip</command> -d <unzip_to_location> to extract the zip file's contents to a location other than the current directory. </para> </note> </para> </step> <step> <para> Consider deleting the archive, if free disk space is an issue. </para> <screen>ms-<replaceable><version></replaceable>]$ rm <quote>mobicents-media-server-all-1.1.0.GA.zip</quote> </screen> </step> </procedure> <procedure> <title>Installing the Media Server Binary Distribution on Windows</title> <step> <para> For this procedure, it is assumed that the downloaded archive is saved in the <filename>My Downloads</filename> folder. </para> </step> <step> <para> Create a subfolder in <filename>My Downloads</filename> to extract the zip file's contents into. For ease of identification, it is recommended that the version number of the binary is included in the folder name. For example, <filename>ms-<version></filename>. </para> </step> <step> <para> Extract the contents of the archive, specifying the destination folder as the one created in the previous step. 
</para> </step> <step> <para> Alternatively, execute the <command>jar -xvf</command> command to extract the binary distribution files from the zip archive. </para> <orderedlist> <listitem> <para> Move the downloaded zip file from <filename>My Downloads</filename> to the folder created in the previous step. </para> </listitem> <listitem> <para> Open the Windows Command Prompt and navigate to the folder that contains the archive using the <command>cd</command> command. </para> </listitem> <listitem> <para> Execute the <command>jar -xvf</command> command to extract the archive contents into the current folder. </para> <screen>C:\Users\<user>\My Downloads\ms-<version>>jar -xvf "mobicents-media-server-all-1.1.0.GA.zip" </screen> </listitem> </orderedlist> </step> <step> <para> It is recommended that the folder holding the Media Server files (in this example, the folder named <filename>ms-<replaceable><version></replaceable></filename>) be moved to a user-defined location for storing executable programs. For example, the <filename>Program Files</filename> folder. </para> </step> <step> <para> Consider deleting the archive if free disk space is an issue. </para> <screen>C:\Users\<user>\My Downloads\ms-<version>>del "mobicents-media-server-all-1.1.0.GA.zip" </screen> </step> </procedure> </section> <section lang="en-US"> <title>Setting the JBOSS_HOME Environment Variable</title> <para> The <application>Mobicents Platform</application> (<application>Mobicents</application>) is built on top of the <application>JBoss Application Server</application> (<application>JBoss AS</application>). You do not need to set the <envar>JBOSS_HOME</envar> environment variable to run any of the <application>Mobicents Platform</application> servers <emphasis>unless</emphasis> <envar>JBOSS_HOME</envar> is <emphasis>already</emphasis> set. 
</para> <para> The best way to know for sure whether <envar>JBOSS_HOME</envar> is already set is to perform a simple check; doing so may save you time and frustration. </para> <formalpara> <title>Checking to See If JBOSS_HOME is Set on Linux</title> <para> At the command line, run <command>echo $JBOSS_HOME</command> to see if it is currently defined in your environment: </para> </formalpara> <!-- ~]$ echo $JBOSS_HOME --> <screen>~]$ echo $JBOSS_HOME </screen> <para> The <application>Mobicents Platform</application> and most Mobicents servers are built on top of the <application>JBoss Application Server</application> (<application>JBoss AS</application>). When the <application>Mobicents Platform</application> or Mobicents servers are built <emphasis>from source</emphasis>, then <envar>JBOSS_HOME</envar> <emphasis>must</emphasis> be set, because the Mobicents files are installed into (or <quote>over top of</quote>, if you prefer) a clean <application>JBoss AS</application> installation. The build process assumes that the location pointed to by the <envar>JBOSS_HOME</envar> environment variable at the time of building is the <application>JBoss AS</application> installation into which you want to install the Mobicents files. </para> <para> This guide does not detail building the <application>Mobicents Platform</application> or any Mobicents servers from source. It is nevertheless useful to understand the role played by <application>JBoss AS</application> and <envar>JBOSS_HOME</envar> in the Mobicents ecosystem. </para> <para> The immediately-following section considers whether you need to set <envar>JBOSS_HOME</envar> at all and, if so, when. The subsequent sections detail how to set <envar>JBOSS_HOME</envar> on Linux and Windows. </para> <important> <para> Even if you fall into the category below of <emphasis>not needing</emphasis> to set <envar>JBOSS_HOME</envar>, you may still want to set it anyway. 
Also, even if you are instructed that you do <emphasis>not need</emphasis> to set <envar>JBOSS_HOME</envar>, it is good practice nonetheless to check and make sure that <envar>JBOSS_HOME</envar> actually <emphasis>isn't</emphasis> set or defined on your system for some reason. This can save you both time and frustration. </para> </important> <bridgehead>You <emphasis>DO NOT NEED</emphasis> to set <envar>JBOSS_HOME</envar> if...</bridgehead> <itemizedlist> <listitem> <para> ...you have installed the <application>Mobicents Platform</application> binary distribution. </para> </listitem> <listitem> <para> ...you have installed a Mobicents server binary distribution <emphasis>which bundles <application>JBoss AS</application>.</emphasis> </para> </listitem> </itemizedlist> <bridgehead>You <emphasis>MUST</emphasis> set <envar>JBOSS_HOME</envar> if you are:</bridgehead> <itemizedlist> <listitem> <para> installing the <application>Mobicents Platform</application> or any of the Mobicents servers <emphasis>from source</emphasis>. </para> </listitem> <listitem> <para> installing the <application>Mobicents Platform</application> binary distribution, or one of the Mobicents server binary distributions, which <emphasis>do not</emphasis> bundle <application>JBoss AS</application>. </para> </listitem> </itemizedlist> <para> Naturally, if you installed the <application>Mobicents Platform</application> or one of the Mobicents server binary releases which <emphasis>do not</emphasis> bundle <application>JBoss AS</application>, yet require it to run, then you should <ulink url="http://www.jboss.org/file-access/default/members/jbossas/freezone/docs/Installation_Guide/4/html/index.html">install <application>JBoss AS</application></ulink> before setting <envar>JBOSS_HOME</envar> or proceeding with anything else. 
</para> <formalpara> <title>Setting the JBOSS_HOME Environment Variable on Linux</title> <para> The <envar>JBOSS_HOME</envar> environment variable must point to the directory which contains all of the files for the <phrase><application>Mobicents Platform</application> or individual Mobicents server</phrase> that you installed. As another hint, this topmost directory contains a <filename>bin</filename> subdirectory. </para> </formalpara> <para> Setting <envar>JBOSS_HOME</envar> in your personal <filename>~/.bashrc</filename> startup script carries the advantage of retaining effect over reboots. Each time you log in, the environment variable is sure to be set for you, as a user. On Linux, it is possible to set <envar>JBOSS_HOME</envar> as a system-wide environment variable, by defining it in <filename>/etc/bashrc</filename>, but this method is neither recommended nor detailed in these instructions. </para> <procedure> <title>To Set JBOSS_HOME on Linux</title> <step> <para> Open the <filename>~/.bashrc</filename> startup script, which is a hidden file in your home directory, in a text editor, and insert the following line on its own line while substituting for the actual install location on your system: </para> <!-- export JBOSS_HOME="/home/<replaceable><username></replaceable>/<replaceable><path></replaceable>/<replaceable><to></replaceable>/<replaceable><install_directory></replaceable>" --> <screen>export JBOSS_HOME="/home/<username>/<path>/<to>/<install_directory>" </screen> </step> <step> <para> Save and close the <filename>.bashrc</filename> startup script. </para> </step> <step> <para> You should <command>source</command> the <filename>.bashrc</filename> script to force your change to take effect, so that <envar>JBOSS_HOME</envar> becomes set for the current session. 
Note that any other terminals which were opened prior to altering <filename>.bashrc</filename> will need to <command>source</command> <filename>~/.bashrc</filename> as well should they require access to <envar>JBOSS_HOME</envar>. </para> <screen>~]$ source ~/.bashrc </screen> </step> <step> <para> Finally, ensure that <envar>JBOSS_HOME</envar> is set in the current session, and actually points to the correct location: </para> <note> <para> The command line usage below is based upon a binary installation of the <application>Mobicents Platform</application>. In this sample output, <envar>JBOSS_HOME</envar> has been set correctly to the <replaceable>topmost_directory</replaceable> of the <application>Mobicents</application> installation. Note that if you are installing one of the standalone <application>Mobicents</application> servers (with <application>JBoss AS</application> bundled!), then <envar>JBOSS_HOME</envar> would point to the <replaceable>topmost_directory</replaceable> of your server installation. </para> </note> <screen>~]$ echo $JBOSS_HOME /home/user/mobicents-all-1.2.1.GA-jboss-4.2.3.GA/jboss/ </screen> </step> </procedure> <formalpara> <title>Setting the JBOSS_HOME Environment Variable on Windows</title> <para> The <envar>JBOSS_HOME</envar> environment variable must point to the directory which contains all of the files for the <phrase>Mobicents Platform or individual Mobicents server</phrase> that you installed. As another hint, this topmost directory contains a <filename>bin</filename> subdirectory. </para> </formalpara> <para> For information on how to set environment variables in recent versions of Windows, refer to <ulink url="http://support.microsoft.com/kb/931715"/>. 
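</para> <para> On either platform, the manual check described above can also be mirrored programmatically: <envar>JBOSS_HOME</envar> should be non-empty and should point at the topmost installation directory, which contains a <filename>bin</filename> subdirectory. The following is an illustrative sketch only; the class and method names are hypothetical and are not part of any Mobicents distribution. </para> <programlisting linenumbering="unnumbered" role="JAVA">

```java
import java.io.File;

/**
 * Minimal sketch of the JBOSS_HOME sanity check described above:
 * the variable should be set and should point at the topmost install
 * directory, which contains a bin subdirectory. Illustrative only;
 * not part of the Media Server distribution.
 */
public class JbossHomeCheck {

    /** Returns true if the given path is non-empty and contains a bin subdirectory. */
    public static boolean looksLikeInstallDir(String jbossHome) {
        if (jbossHome == null || jbossHome.isEmpty()) {
            return false;
        }
        return new File(jbossHome, "bin").isDirectory();
    }

    public static void main(String[] args) {
        // Prints true when JBOSS_HOME is set and points at a plausible install.
        System.out.println(looksLikeInstallDir(System.getenv("JBOSS_HOME")));
    }
}
```

</programlisting> <para> Running the class prints <literal>false</literal> when <envar>JBOSS_HOME</envar> is unset or points somewhere without a <filename>bin</filename> subdirectory. </para> <para>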
</para> </section> <section id="itms-binary-Media_Server-Running"> <title>Running</title> <para> In the Linux terminal or Command Prompt, you will be able to tell that the Media Server started successfully if the last line of output is similar to the following (ending with <quote>Started in 23s:648ms</quote>): </para> <programlisting>11:23:07,656 INFO [Server] JBoss (MX MicroKernel) [4.2.2.GA (build: SVNTag=JBoss_4_2_2_GA date=200710221139)] Started in 23s:648ms </programlisting> <procedure> <title>Running the Media Server on Linux</title> <step> <para> Change the working directory to installation directory (the one in which the zip file's contents was extracted to) </para> <screen>downloads]$ cd "ms-<version>" </screen> </step> <step> <para> (Optional) Ensure that the <filename>bin/run.sh</filename> start script is executable. </para> <screen>ms-<version>]$ chmod +x bin/run.sh </screen> </step> <step> <para> Execute the <filename>run.sh</filename> Bourne shell script. </para> <screen>ms-<version>]$ ./bin/run.sh </screen> </step> </procedure> <note> <para> Instead of executing the Bourne shell script to start the server, the <filename>run.jar</filename> executable Java archive can be executed from the <filename>bin</filename> directory: </para> <screen>ms-<version>]$ java -jar bin/run.jar </screen> </note> <procedure> <title>Running Media Server on <productname>Windows</productname> </title> <step> <para> Using Windows Explorer, navigate to the <filename>bin</filename> subfolder in the installation directory. </para> </step> <step> <para> The preferred way to start the Media Server is from the Command Prompt. The command line interface displays details of the startup process, including any problems encountered during the startup process. 
</para> <para> Open the Command Prompt via the <guilabel>Start</guilabel> menu and navigate to the correct folder: </para> <screen>C:\Users\<user>\My Downloads> cd "ms-<version>" </screen> </step> <step> <para> Start the JBoss Application Server by executing one of the following files: <itemizedlist> <listitem> <para> <filename>run.bat</filename> batch file: </para> <screen>C:\Users\<user>\My Downloads\ms-<version>>bin\run.bat </screen> </listitem> <listitem> <para> <filename>run.jar</filename> executable Java archive: </para> <screen>C:\Users\<user>\My Downloads\ms-<version>>java -jar bin\run.jar </screen> </listitem> </itemizedlist> </para> </step> </procedure> </section> <section id="itms-binary-Media_Server-Stopping"> <title>Stopping the Media Server</title> <para> Detailed instructions for stopping the JBoss Application Server are given below, arranged by platform. If the server is correctly stopped, the following three lines are displayed as the last output in the Linux terminal or Command Prompt: </para> <programlisting>[Server] Shutdown complete Shutdown complete Halting VM </programlisting> <procedure> <title>Stopping the Media Server on Linux</title> <step> <para> Change the working directory to the binary distribution's install directory. 
</para> <screen>~]$ cd "ms-<version>" </screen> </step> <step> <para> (Optional) Ensure that the <filename>bin/shutdown.sh</filename> shutdown script is executable: </para> <screen>ms-<version>]$ chmod +x bin/shutdown.sh </screen> </step> <step> <para> Run the <filename>shutdown.sh</filename> executable Bourne shell script with the <option>-S</option> option (the short option for <option>--shutdown</option>) as a command line argument: </para> <screen>ms-<version>]$ ./bin/shutdown.sh -S </screen> </step> </procedure> <note> <para> The <filename>shutdown.jar</filename> executable Java archive with the <option>-S</option> option can also be used to shut down the server: </para> <screen>ms-<version>]$ java -jar bin/shutdown.jar -S </screen> </note> <procedure> <title>Stopping the Media Server on Windows</title> <step> <para> Stopping the JBoss Application Server on Windows consists of executing either the <filename>shutdown.bat</filename> or the <filename>shutdown.jar</filename> executable file in the <filename>bin</filename> subfolder of the Media Server binary distribution. Ensure the <option>-S</option> option (the short option for <option>--shutdown</option>) is included in the command line argument. </para> <screen>C:\Users\<user>\My Downloads\ms-<version>>bin\shutdown.bat -S </screen> <stepalternatives> <step> <para> The <filename>shutdown.jar</filename> executable Java archive with the <option>-S</option> option can also be used to shut down the server: </para> <screen>C:\Users\<user>\My Downloads\ms-<version>>java -jar bin\shutdown.jar -S </screen> </step> </stepalternatives> </step> </procedure> </section> <section id="itms-binary-Media_Server-Using"> <title>Using</title> <para> The Media Server can be controlled using the Management Console, which is started along with the server. </para> </section> <section id="itms-Server_Structure"> <title>Server Structure</title> <para> Now that the server is installed, it is important to understand the layout of the server directories. 
An understanding of the server structure is useful when deploying examples and making configuration changes. It is also useful to understand which components can be removed to reduce the server boot time. </para> <para> The Media Server installation directory follows a standard structure. <xref linkend="tab-mss-Directory-Structure"/> describes each directory, and the type of information contained within each location. </para> <table frame="all" id="tab-mss-Directory-Structure"> <title>Directory Structure</title> <tgroup align="left" cols="2" colsep="1" rowsep="1"> <colspec colname="c1"/> <colspec colname="c2"/> <thead> <row> <entry align="center"> Directory Name </entry> <entry align="center"> Description </entry> </row> </thead> <tbody> <row> <entry> bin </entry> <entry> Contains the entry point JARs and start-up scripts included with the Media Server distribution. </entry> </row> <row> <entry> conf </entry> <entry> Contains the core configuration required for the server. This includes the bootstrap descriptor, the logging configuration, and the default bootstrap-beans.xml configuration file. </entry> </row> <row> <entry> deploy </entry> <entry> Contains the dynamic deployment content required by the hot deployment service. The deploy location can be overridden by specifying a location in the URL attribute of the URLDeploymentScanner configuration item. </entry> </row> <row> <entry> lib </entry> <entry> Contains the startup JAR files used by the server. </entry> </row> <row> <entry> log </entry> <entry> Contains the logs from the bootstrap logging service. The <filename>log</filename> directory is the default directory into which the bootstrap logging service places its logs; however, the location can be overridden by altering the log4j.xml configuration file. This file is located in the <filename>/conf</filename> directory. 
</entry> </row> </tbody> </tgroup> </table> <para> The Media Server uses a number of XML configuration files that control various aspects of the server. <xref linkend="tab-mss-Core_Configuration_File_Set"/> describes the location of the key configuration files, and provides a description of each. </para> <table frame="all" id="tab-mss-Core_Configuration_File_Set"> <title>Core Configuration File Set</title> <tgroup align="left" cols="2" colsep="1" rowsep="1"> <colspec colname="c1"/> <colspec colname="c2"/> <thead> <row> <entry align="center"> File Name and Location </entry> <entry align="center"> Description </entry> </row> </thead> <tbody> <row> <entry> conf/bootstrap-beans.xml </entry> <entry> Specifies which additional microcontainer deployments are loaded as part of the bootstrap phase. bootstrap-beans.xml references other configuration files contained in the <filename>/conf/bootstrap/</filename> directory. For a standard configuration, the bootstrap configuration files require no alteration. </entry> </row> <row> <entry> conf/log4j.properties </entry> <entry> Specifies the Apache <literal>log4j</literal> framework category priorities and appenders used by the Media Server. </entry> </row> <row> <entry> deploy/ann-beans.xml </entry> <entry> Specifies the configuration for announcement access points. </entry> </row> <row> <entry> deploy/ivr-beans.xml </entry> <entry> Specifies the configuration for Interactive Voice Response (IVR) endpoints. </entry> </row> <row> <entry> deploy/prelay-beans.xml </entry> <entry> Specifies the configuration for Packet Relay endpoints. </entry> </row> <row> <entry> deploy/cnf-beans.xml </entry> <entry> Specifies the configuration for Conference endpoints. </entry> </row> <row> <entry> deploy/test-beans.xml </entry> <entry> Specifies the endpoint for test capabilities. </entry> </row> <row> <entry> deploy/mgcp-conf.xml </entry> <entry> Specifies the configuration for the MGCP controller. 
</entry> </row> </tbody> </tgroup> </table> </section> <section id="itms-binary-Media_Server-Testing"> <title>Testing</title> <para> For information on testing the Media Server, refer to <xref linkend="itms-Writing_and_Running_Tests_Against_the_Media_Server"/>. </para> </section> <section id="itms-binary-Media_Server-Uninstalling"> <title>Uninstalling</title> <para> To uninstall the Media Server, simply delete the directory into which you decompressed the binary distribution archive. </para> </section> </section> <section id="itms-Writing_and_Running_Tests_Against_the_Media_Server"> <title>Writing and Running Tests Against the Media Server</title> <para> For information about the different kinds of tests that the Media Server provides, refer to <ulink url="http://groups.google.com/group/mobicents-public/web/mobicents-ms-tests">Writing and Running Tests Against MMS</ulink>. </para> </section> </chapter> <chapter id="ctms-Configuring_the_Media_Server" lang="en-US"> <!-- chapter id nickname: ctms --><title>Configuring the Media Server</title> <para> <!-- After the Media Server has successfully started, you can then locate the JMX console at <ulink url="http://localhost:8080/jmx-console"/> in the default distribution. Note that if you have configured the servlet container (for example, Tomcat) to service a different port, then you will need to supply a different port number in the URL. -->All endpoints are plugged into the Media Server using JBoss Microcontainer. To create a component for the Media Server, the appropriate component Factory must be used. Each component within a factory has an identifier and name that is unique across the server implementation. Because each component is unique in the Media Server, it can be referenced and pulled into other applications. 
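</para> <para> As a rough illustration of this factory principle, the following sketch hands out server-unique identifiers and names from a factory. All class and method names here are hypothetical and are not the Media Server's actual API; the sketch only demonstrates the uniqueness property described above. </para> <programlisting linenumbering="unnumbered" role="JAVA">

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustrative sketch: every component created through a factory carries
 * a name and an identifier that is unique across the server, so it can
 * be referenced from other applications. Hypothetical names; not the
 * Media Server's actual API.
 */
public class ComponentFactorySketch {

    /** Server-wide counter used to hand out unique identifiers. */
    private static final AtomicInteger NEXT_ID = new AtomicInteger(1);

    private final String factoryName;

    public ComponentFactorySketch(String factoryName) {
        this.factoryName = factoryName;
    }

    /** Creates a component whose id and name are unique on this server. */
    public Component newComponent() {
        int id = NEXT_ID.getAndIncrement();
        return new Component(id, factoryName + "-" + id);
    }

    public static final class Component {
        public final int id;
        public final String name;

        Component(int id, String name) {
            this.id = id;
            this.name = name;
        }
    }
}
```

</programlisting> <para> Two factories never produce components with the same identifier or name, which is what makes a component referenceable from other applications. </para> <para>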
</para> <!-- <mediaobject id="mediaobj-mms-MMSConfiguration-ss-JMXConsole.gif"> <imageobject> <imagedata align="center" width="700" fileref="images/mms-MMSConfiguration-ss-JMXConsole.gif" format="GIF" /> </imageobject> </mediaobject> --> <section> <title>Application Wiring</title> <para> The process of implementing an application in the Media Server involves specifying the path over which media information travels. This process is referred to as <quote>wiring</quote> the application. Wiring connects media sources, components, and sinks together so that data can pass through every stage of the media flow. </para> <para> The following diagram provides an example of a basic wiring schematic for components in the Media Server. </para> <mediaobject> <imageobject> <imagedata align="center" fileref="images/mms-ApplicationWiring-dia-Media_Flow_Path.png" format="PNG" scalefit="1" width="400"/> </imageobject> <caption>Media Flow Path</caption> </mediaobject> <para> The primary media components depicted in the diagram are: </para> <itemizedlist> <listitem> <para> Source components, which generate media content (refer to <xref linkend="ctms-Media_Source_Interface"/> for the base interface). </para> </listitem> <listitem> <para> Sink components, which consume media content (refer to <xref linkend="ctms-Media_Sink_Interface"/> for the base interface). </para> </listitem> <listitem> <para> Inlets, which provide access to a media sink within the media flow (refer to <xref linkend="ctms-Inlet_Interface"/> for the base interface). </para> </listitem> <listitem> <para> Outlets, which access the output from an Inlet as a media source in the media flow (refer to <xref linkend="ctms-Outlet_Interface"/> for the base interface). </para> </listitem> <listitem> <para> Channels and Pipes, which join the elements that form a media flow path (refer to <xref linkend="Channels_and_Pipes"/> for the base interface). 
</para> </listitem> </itemizedlist> <para> The components that can exist between a Source and Sink vary between each wiring implementation. For example, some wiring implementations contain components that are not Source or Sink components as such, but provide access to components that emulate sources and sinks. In the diagram, <literal>Component A</literal> represents a source (Input) and <literal>Component B</literal> represents a sink (Output) in the media flow. </para> <para> Also note that <literal>Component A</literal> provides a Source to the implementation <literal>Composite A</literal>. <literal>Composite A</literal> can use the Input received from <literal>Component A</literal> to supply the Source for the application. </para> <formalpara> <title>Media Sources and Sinks</title> <para> The Media Source and Media Sink interfaces define the general wiring principle for a component, including media transition and handling. Each media source object generates media after receiving a start() command, while each media sink implements the data handling logic within the receive method. Each source and sink pair defines the connection method used to wire the components in a media stream together. </para> </formalpara> <para> <xref linkend="ctms-Media_Source_Interface"/> describes the base interfaces for the Media Source container. </para> <example id="ctms-Media_Source_Interface"> <title>Base Media Source Interface</title> <programlisting linenumbering="unnumbered" role="JAVA"> public interface MediaSource extends Component { /** * Joins this source with media sink. * * @param sink the media sink to join with. */ public void connect(MediaSink sink); /** * Drops connection between this source and media sink. * * @param sink the sink to disconnect. */ public void disconnect(MediaSink sink); /** * Starts media production. */ public void start(); /** * Terminates media production. */ public void stop(); /** * Get possible formats in which this source can stream media. 
* * @return an array of Format objects. */ public Format[] getFormats();} </programlisting> </example> <para> <xref linkend="ctms-Media_Sink_Interface"/> describes the base interfaces for the Media Sink container. </para> <example id="ctms-Media_Sink_Interface"> <title>Base Media Sink Interface</title> <programlisting linenumbering="unnumbered" role="JAVA"> public interface MediaSink extends Component { /** * Get possible formats which this consumer can handle. * * @return an array of Format objects. */ public Format[] getFormats(); /** * Checks if the specified format is acceptable by this sink. * This method is used by DEMUX to perform proper demultiplexing. * * @param format the format to check. * @return true if this sink can handle the specified format. */ public boolean isAcceptable(Format format); /** * Joins this media sink with a media source. * A concrete media sink may allow joining with multiple sources. * * @param source the media source to join with. */ public void connect(MediaSource source); /** * Breaks the connection with a media source. * A concrete media sink may be joined with multiple sources, so * this method requires the explicit source for disconnection. * * @param source the source to disconnect from. */ public void disconnect(MediaSource source); /** * This method is called by the media source when new media is available. * * @param buffer the Buffer object which contains the next portion of media. */ public void receive(Buffer buffer); //public void dispose(); } </programlisting> </example> <formalpara> <title>Inlets and Outlets</title> <para> Inlets and Outlets are used to aggregate several wired components into a single object. <xref linkend="ctms-Inlet_Interface"/> describes the base interfaces for the Inlet container. 
</para> </formalpara> <example id="ctms-Inlet_Interface"> <title>Inlet Interface</title> <programlisting linenumbering="unnumbered" role="JAVA"> public interface Inlet extends Component { /** * Provides access to the media sink. * * @return the reference to the media sink. */ public MediaSink getInput(); } </programlisting> </example> <para> <xref linkend="ctms-Outlet_Interface"/> describes the base interfaces for the Outlet container. </para> <example id="ctms-Outlet_Interface"> <title>Outlet Interface</title> <programlisting linenumbering="unnumbered" role="JAVA"> public interface Outlet extends Component { /** * Provides access to the media source. * * @return the reference to the media source. */ public MediaSource getOutput(); } </programlisting> </example> <formalpara id="Channels_and_Pipes"> <title>Channels and Pipes</title> <para> Channels allow Media Sources and Sinks to be joined with other channels by creating pipes between each component. Using multiplexers and demultiplexers, the media stream can be merged or split. Within the stream, different signal processors can be connected to increase performance or provide greater flexibility. Media Server uses declarative channel construction to construct customized media flow paths by utilizing pipes. </para> </formalpara> <para> <xref linkend="ctms-AudioProcessorFactory_Deployment_Descriptor"/> shows the AudioProcessorFactory deployment descriptor, which contains the codecFactories used by applications. </para> <example id="ctms-AudioProcessorFactory_Deployment_Descriptor"> <title>AudioProcessorFactory Deployment Descriptor</title> <programlisting linenumbering="unnumbered" role="XML"> ... 
<bean name="AudioProcessorFactory" class="org.mobicents.media.server.impl.dsp.DspFactory"> <property name="name">audio.processor</property> <property name="codecFactories"> <list> <inject bean="G711.UlawEncoderFactory" /> <inject bean="G711.UlawDecoderFactory" /> <inject bean="G711.AlawEncoderFactory" /> <inject bean="G711.AlawDecoderFactory" /> <inject bean="SpeexEncoderFactory" /> <inject bean="SpeexDecoderFactory" /> <inject bean="GSMEncoderFactory" /> <inject bean="GSMDecoderFactory" /> <inject bean="G729EncoderFactory" /> <inject bean="G729DecoderFactory" /> </list> </property> </bean> ... </programlisting> </example> <para> The codec factories listed in the deployment descriptor each represent a component that can be deployed inside a media stream. To create a pipe to another component, the component name is specified in the <property> element inside the <bean> element. <xref linkend="ctms-IVR_Pipes"/> shows a series of Interactive Voice Response (IVR) pipes. </para> <example id="ctms-IVR_Pipes"> <title>IVR Pipes</title> <programlisting linenumbering="unnumbered" role="XML"> <bean name="IVR-Pipe-1" class="org.mobicents.media.server.resource.PipeFactory"> <property name="outlet">audio.processor</property> </bean> <bean name="IVR-Pipe-2" class="org.mobicents.media.server.resource.PipeFactory"> <property name="inlet">audio.processor</property> <property name="outlet">DeMux</property> </bean> <bean name="IVR-Pipe-3" class="org.mobicents.media.server.resource.PipeFactory"> <property name="inlet">DeMux</property> <property name="outlet">Rfc2833DetectorFactory</property> </bean> <bean name="IVR-Pipe-4" class="org.mobicents.media.server.resource.PipeFactory"> <property name="inlet">DeMux</property> </bean> </programlisting> </example> <para> In <literal>IVR-Pipe-1</literal>, the <literal>audio.processor</literal> component is used to activate the codec factories for the media stream. 
Notice how each pipe is connected; the <literal>outlet</literal> attribute is used as the <literal>inlet</literal> attribute value for the next pipe. </para> <para> To use the channels together with the pipes, a channel declaration must be specified. In this declaration, the components and pipes are explicitly stated. <xref linkend="ctms-IVR_Channel_Declaration"/> shows the IVR-RxChannelFactory channel declaration, which contains elements from <xref linkend="ctms-AudioProcessorFactory_Deployment_Descriptor"/> and <xref linkend="ctms-IVR_Pipes"/>. </para> <example id="ctms-IVR_Channel_Declaration"> <title>IVR Channel Declaration</title> <programlisting linenumbering="unnumbered" role="XML"> <bean name="IVR-RxChannelFactory" class="org.mobicents.media.server.resource.ChannelFactory"> <property name="components"> <list> <inject bean="DeMuxFactory" /> <inject bean="Rfc2833DetectorFactory" /> <inject bean="AudioProcessorFactory" /> </list> </property> <property name="pipes"> <list> <inject bean="IVR-Pipe-1" /> <inject bean="IVR-Pipe-2" /> <inject bean="IVR-Pipe-3" /> <inject bean="IVR-Pipe-4" /> </list> </property> </bean> </programlisting> </example> </section> <section> <title>Media Buffer</title> <para> The media transition process strongly depends on the capabilities of the underlying layer. In an IP-based network, the Real-time Transmission Protocol (RTP) is used to transmit data. However, in a Time Division Multiplexing (TDM) network, a circuit channel is used for transmission. For exchanging media data between components within the Media Server, a special container is used. </para> <para> A Media Buffer is a media-data container within the Media Server, which carries data from one processing component to the next. A Buffer object maintains information such as the time stamp, length, data format, and header information required to process the media data. 
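</para> <para> As a rough illustration of such a container, the following sketch models the attributes a Buffer carries (data, offset, length, format, sequence number, time stamp, and duration). The class is hypothetical and is not the Media Server's actual Buffer implementation. </para> <programlisting linenumbering="unnumbered" role="JAVA">

```java
/**
 * Minimal sketch of a media Buffer container, assuming the attribute set
 * described in this section. Field names mirror the documented attributes,
 * but this is an illustration, not the server's actual Buffer class.
 */
public class MediaBufferSketch {

    public static final class Buffer {
        public byte[] data;          // media chunk; an array of bytes by default
        public int offset;           // where valid data begins in the array
        public int length;           // number of valid bytes in the buffer
        public String format;        // data format of the chunk
        public long sequenceNumber;  // position of this buffer in the stream
        public long timestamp;       // time stamp, in relative units
        public long duration;        // duration, in relative units
    }

    /** Wraps a media chunk in a buffer, filling in the bookkeeping fields. */
    public static Buffer wrap(byte[] chunk, String format, long seq, long timestamp) {
        Buffer b = new Buffer();
        b.data = chunk;
        b.offset = 0;
        b.length = chunk.length;
        b.format = format;
        b.sequenceNumber = seq;
        b.timestamp = timestamp;
        return b;
    }
}
```

</programlisting> <para> A source would fill such a buffer and hand it to a sink's receive method; the offset and length fields let a component process only the valid portion of the byte array. </para> <para>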
</para> <para> A Buffer object contains the following attributes: </para> <variablelist> <varlistentry> <term>data</term> <listitem> <para> Specifies the internal data object that retains the media chunk in the buffer. An array of bytes is used by default. </para> </listitem> </varlistentry> <varlistentry> <term>offset</term> <listitem> <para> Specifies the offset into the data array where the valid data begins. </para> </listitem> </varlistentry> <varlistentry> <term>length</term> <listitem> <para> Specifies the valid data length present in the buffer. </para> </listitem> </varlistentry> <varlistentry> <term>format</term> <listitem> <para> Specifies the data format present in the buffer. </para> </listitem> </varlistentry> <varlistentry> <term>sequenceNumber</term> <listitem> <para> Specifies the sequence number of the buffer. </para> </listitem> </varlistentry> <varlistentry> <term>timestamp</term> <listitem> <para> Specifies the time stamp (in relative units) of the buffer. </para> </listitem> </varlistentry> <varlistentry> <term>duration</term> <listitem> <para> Specifies the duration (in relative units) of the buffer. </para> </listitem> </varlistentry> </variablelist> </section> <section> <title>Timer</title> <para> The Timer provides a time source, and functions similarly to a crystal oscillator. This endpoint can be configured to specify the millisecond interval between two oscillations. </para> <para> The configurable aspect of the Timer is: </para> <variablelist> <varlistentry> <term>heartBeat</term> <listitem> <para> Time interval (in milliseconds) between two subsequent oscillations. </para> </listitem> </varlistentry> </variablelist> </section> <section> <title>MainDeployer</title> <para> The MainDeployer endpoint manages hot deployment of components and endpoints. Hot-deployable components and endpoints are defined as those that can be added to or removed from the running server. 
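</para> <para> The hot add and remove behaviour can be sketched as a periodic scan that compares the deploy directory against its previous contents and reports files that were added, modified, or removed. The following is an illustrative sketch only; the class and method names are hypothetical and are not the server's MainDeployer API. </para> <programlisting linenumbering="unnumbered" role="JAVA">

```java
import java.io.File;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of a hot-deployment scan, assuming the behaviour described in
 * this section: diff the deploy directory against the previous scan and
 * report added, changed, and removed configuration files. Illustrative
 * names only; not the server's MainDeployer implementation.
 */
public class DeployScannerSketch {

    private final Map<String, Long> lastScan = new HashMap<>();

    /** Returns the names of XML files that were added, modified, or removed. */
    public Set<String> scan(File deployDir) {
        Set<String> changed = new HashSet<>();
        Map<String, Long> current = new HashMap<>();
        // Only configuration files pass the filter (a stand-in for fileFilter).
        File[] files = deployDir.listFiles((dir, name) -> name.endsWith(".xml"));
        if (files != null) {
            for (File f : files) {
                current.put(f.getName(), f.lastModified());
                Long previous = lastScan.get(f.getName());
                if (previous == null || previous != f.lastModified()) {
                    changed.add(f.getName()); // new or modified -> (re)deploy
                }
            }
        }
        for (String old : lastScan.keySet()) {
            if (!current.containsKey(old)) {
                changed.add(old); // removed -> undeploy its beans
            }
        }
        lastScan.clear();
        lastScan.putAll(current);
        return changed;
    }
}
```

</programlisting> <para> Running such a scan on a fixed interval approximates the scanPeriod behaviour: an unchanged directory yields an empty result, while dropping or deleting a beans file shows up on the next pass. </para> <para>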
</para> <para> MainDeployer scans the <filename>/deploy</filename> directory, looking for configuration files that have changed since the last scan. When MainDeployer detects changes to the directory, those changes are processed. This includes re-deploying changed beans, adding new beans, or removing beans that are no longer required. </para> <para> To understand the functionality of the MainDeployer endpoint, experiment by removing the <filename>ann-beans.xml</filename> configuration file from the <filename>/deploy</filename> directory while the server is running. Observe how the server behaves once the file is removed from the folder. </para> <para> The configurable aspects of MainDeployer are: </para> <variablelist> <varlistentry> <term>path</term> <listitem> <para> Specifies the location of the configuration XML files. Generally, this is the /deploy directory. </para> </listitem> </varlistentry> <varlistentry> <term>scanPeriod</term> <listitem> <para> Specifies the interval (in milliseconds) at which MainDeployer checks the specified path for changes. </para> </listitem> </varlistentry> <varlistentry> <term>fileFilter</term> <listitem> <para> Specifies the file extensions that will be deployed or monitored. </para> </listitem> </varlistentry> </variablelist> </section> <section id="ctms-RTPFactory"> <title>RTPFactory</title> <para> <literal>RTPFactory</literal> is responsible for managing the actual RTP Socket. A reference to the <literal>RTPFactory</literal> is passed to each endpoint, which, in turn, leverages the <literal>RTPFactory</literal> to create Connections and decide on supported codecs.
</para> <example id="ctms-The_RTPFactory_MBean"> <title>The RTPFactory MBean</title> <programlisting linenumbering="unnumbered" role="XML"><mbean code="org.mobicents.media.server.impl.jmx.rtp.RTPFactory" name="media.mobicents:service=RTPFactory,QID=1"> <attribute name="JndiName">java:media/mobicents/protocol/RTP</attribute> <attribute name="BindAddress">${jboss.bind.address}</attribute> <attribute name="Jitter">60</attribute> <attribute name="PacketizationPeriod">20</attribute> <attribute name="PortRange">1024-65535</attribute> <attribute name="AudioFormats">0 = ULAW, 8000, 8, 1; 3 = GSM, 8000, 8, 1; 8 = ALAW, 8000, 8, 1; 97 = SPEEX, 8000, 8, 1; 101 = telephone-event/8000</attribute> </mbean> </programlisting> </example> <para> The configurable aspects of the RTPFactory are: </para> <variablelist> <varlistentry> <term>formatMap</term> <listitem> <para> Specifies the relationship between the RTP payload type and format. <xref linkend="ctms-Supported_RTP_Formats"/> describes the payload types and their supported formats. </para> </listitem> </varlistentry> <varlistentry> <term>bindAddress</term> <listitem> <para> Specifies the IP address to which the RTP socket is bound. </para> </listitem> </varlistentry> <varlistentry> <term>portRange</term> <listitem> <para> Specifies the port range within which the RTP socket will be created. The first free port in the given range is assigned to the socket. </para> </listitem> </varlistentry> <varlistentry> <term>jitter</term> <listitem> <para> Specifies the size of the jitter buffer (in milliseconds) for incoming packets. </para> </listitem> </varlistentry> <varlistentry> <term>timer</term> <listitem> <para> Specifies the timer instance with which the reading process is synchronized. </para> </listitem> </varlistentry> <varlistentry> <term>stunAddress</term> <listitem> <para> Specifies the location of the STUN server to use.
For more information regarding STUN, refer to <xref linkend="ctms-MMS_STUN_Support"/>. </para> </listitem> </varlistentry> </variablelist> <formalpara> <title>Supported RTP Formats</title> <para> The <literal>RTPFactory</literal> is able to receive the following RTP media types: </para> </formalpara> <table frame="all" id="ctms-Supported_RTP_Formats"> <title>Supported RTP Formats</title> <tgroup align="left" cols="4" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <colspec colname="col4" colnum="4"/> <thead> <row> <entry> Payload Type </entry> <entry> Format </entry> <entry> Specification </entry> <entry> Description </entry> </row> </thead> <tbody> <row> <entry> 0 </entry> <entry> PCMU </entry> <entry> <ulink url="http://www.ietf.org/rfc/rfc1890.txt">RFC 1890</ulink> </entry> <entry> ITU G.711 U-law audio </entry> </row> <row> <entry> 3 </entry> <entry> GSM </entry> <entry> <ulink url="http://www.ietf.org/rfc/rfc1890.txt">RFC 1890</ulink> </entry> <entry> GSM full-rate audio </entry> </row> <row> <entry> 8 </entry> <entry> PCMA </entry> <entry> <ulink url="http://www.ietf.org/rfc/rfc1890.txt">RFC 1890</ulink> </entry> <entry> ITU G.711 A-law audio </entry> </row> <row> <entry> 18 </entry> <entry> G729 </entry> <entry> N/A </entry> <entry> G.729 audio </entry> </row> <row> <entry> 31 </entry> <entry> H.261 </entry> <entry> N/A </entry> <entry> Video </entry> </row> <row> <entry> 97 </entry> <entry> SPEEX </entry> <entry> N/A </entry> <entry> Speex narrow band audio </entry> </row> <row> <entry> 101 </entry> <entry> DTMF </entry> <entry> <ulink url="http://www.ietf.org/rfc/rfc2833.txt">RFC 2833</ulink> </entry> <entry> Dual-tone Multi-frequency (DTMF) Events </entry> </row> </tbody> </tgroup> </table> </section> <section id="ctms-Announcement_Server_Access_Points"> <title>Announcement Server Access Points</title> <para> An Announcement Server endpoint provides access to an
announcement service. Upon receiving requests from the call agent, an Announcement Server will <quote>play</quote> a specified announcement. A given announcement endpoint is not expected to support more than one connection at a time. Connections to an Announcement Server are typically one-way (<quote>half-duplex</quote>); therefore, the Announcement Server is not expected to listen to audio signals from the connection. </para> <para> Announcement endpoints do not transcode announced media; to achieve transcoding, the application must use Packet Relay endpoints on the media path. Note also that the Announcement Server endpoint can generate tones such as DTMF. </para> <example id="ctms-The_AnnEndpointManagement_MBean"> <title>The AnnEndpointManagement MBean</title> <programlisting linenumbering="unnumbered" role="XML"><mbean code="org.mobicents.media.server.impl.jmx.enp.ann.AnnEndpointManagement" name="media.mobicents:endpoint=announcement"> <attribute name="JndiName">media/trunk/Announcement</attribute> <attribute name="RtpFactoryName">java:media/mobicents/protocol/RTP</attribute> </mbean> </programlisting> </example> <formalpara> <title>Configuration of an Announcement Server Access Point</title> <para> The configurable attributes of the Announcement Server are as follows: </para> </formalpara> <variablelist> <varlistentry> <term>localName</term> <listitem> <para> Specifies the name under which the endpoint is to be bound. </para> <para> This parameter allows a set of endpoints to be specified, which are then created and bound automatically by the Announcement Server. Consider the scenario where a total of 10 endpoints are required. To specify this in the attribute, the following path is provided: <literal>/media/aap/[1..10]</literal>.
The <literal>[1..10]</literal> in the directory path tells the Announcement Server to create a set of 10 endpoints in the <literal>/aap</literal> directory, named according to the endpoint number, starting at one and finishing at ten. For example, <literal>/media/aap/1, /media/aap/2, ... /media/aap/10</literal>. </para> </listitem> </varlistentry> <varlistentry> <term>timer</term> <listitem> <para> Specifies the timer instance with which the reading process is synchronized. </para> </listitem> </varlistentry> <varlistentry> <term>sourceFactory</term> <listitem> <para> Specifies the Java bean responsible for generating the source media. </para> </listitem> </varlistentry> <varlistentry> <term>sinkFactory</term> <listitem> <para> Specifies the Java bean responsible for using the source media generated by the <literal>sourceFactory</literal> bean. </para> </listitem> </varlistentry> <varlistentry> <term>rtpFactory</term> <listitem> <para> Specifies the location of the RTP Factory. For more information about the RTP Factory, refer to <xref linkend="ctms-RTPFactory"/>. </para> </listitem> </varlistentry> <varlistentry> <term>txChannelFactory</term> <listitem> <para> Specifies the instance of the custom transmission channel factory. </para> </listitem> </varlistentry> <varlistentry> <term>rxChannelFactory</term> <listitem> <para> Specifies the instance of the custom receiver channel factory. </para> </listitem> </varlistentry> </variablelist> </section> <section id="ctms-Interactive_Voice_Response"> <title>Interactive Voice Response</title> <para> An Interactive Voice Response (<acronym>IVR</acronym>) endpoint provides access to an IVR service. Upon requests from the Call Agent, the IVR server <quote>plays</quote> announcements and tones, and <quote>listens</quote> to voice messages from the user. A given IVR endpoint is not expected to support more than one connection at a time.
For example, if several connections were established to the same endpoint, then the same tones and announcements would be played simultaneously over all connections. IVR endpoints do not possess the capability of transcoding played or recorded media streams. IVRs record or play media in the format in which it was delivered. </para> <example id="ctms-The_IVREndpointManagement_MBean"> <title>The IVREndpointManagement MBean</title> <programlisting linenumbering="unnumbered" role="XML"><mbean code="org.mobicents.media.server.impl.jmx.enp.ivr.IVRTrunkManagement" name="media.mobicents:endpoint=ivr"> <depends>media.mobicents:service=RTPFactory,QID=1</depends> <attribute name="JndiName">media/trunk/IVR</attribute> <attribute name="RtpFactoryName">java:media/mobicents/protocol/RTP</attribute> <attribute name="MediaType">audio.x_wav</attribute> <!-- DtmfMode can be either RFC2833 or INBAND or AUTO --> <attribute name="DtmfMode">AUTO</attribute> <attribute name="RecordDir">${jboss.server.data.dir}</attribute> <attribute name="Channels">24</attribute> </mbean> </programlisting> </example> <formalpara> <title>Configuration of the Interactive Voice Response Endpoint</title> <para> The configurable attributes of the Interactive Voice Response endpoint are as follows: </para> </formalpara> <variablelist> <varlistentry> <term>JndiName</term> <listitem> <para> The Java Naming and Directory Interface (<acronym>JNDI</acronym>) name under which the endpoint is to be bound. </para> </listitem> </varlistentry> <varlistentry> <term>RtpFactoryName</term> <listitem> <para> The JNDI name of <literal>RTPFactory</literal>. </para> </listitem> </varlistentry> <varlistentry> <term>RecordDir</term> <listitem> <para> The directory where the recorded files should be created and stored. </para> </listitem> </varlistentry> <varlistentry> <term>Channels</term> <listitem> <para> Controls the number of IVR endpoints available in the server instance's endpoint pool. Endpoints are not created dynamically.
At any given time, the number of endpoints in use cannot exceed the <userinput>Channels</userinput> value. This value cannot be changed at runtime. </para> </listitem> </varlistentry> <varlistentry> <term>MediaType</term> <listitem> <para> Specifies the media type. It currently defaults to WAV. </para> </listitem> </varlistentry> <varlistentry> <term>DtmfMode</term> <listitem> <para> Controls DTMF detection mode. Possible values are: <userinput>RFC2833</userinput>, <userinput>INBAND</userinput>, or <userinput>AUTO</userinput>. </para> </listitem> </varlistentry> </variablelist> <!-- <varlistentry> <term>PacketizationPeriod</term> <listitem> <para>The period of media stream packetization in milliseconds.</para> </listitem> </varlistentry> <varlistentry> <term>PCMA</term> <listitem> <para>Enable or disable G711 (A-law) codec support.</para> </listitem> </varlistentry> <varlistentry> <term>PCMU</term> <listitem> <para>Enable or disable G711 (U-law) codec support.</para> </listitem> </varlistentry> <varlistentry> <term>RecordDir</term> <listitem> <para>The directory where the recorded files should be created and stored.</para> </listitem> </varlistentry> <varlistentry> <term>DTMF</term> <listitem> <para>The dual-tone multi-frequency (<acronym>DTMF</acronym>) type supported. By default it is set to AUTO, but you can also specify INBAND or RFC2833. Note that if you select RFC2833, you <emphasis>also</emphasis> need to specify the DTMF Payload property. For example:</para> <programlisting linenumbering="unnumbered" role="XML"><![CDATA[ <attribute name="DTMF"> detector.mode=INBAND dtmf.payload = 101 </attribute>]]></programlisting> <variablelist> <varlistentry> <term>detector.mode</term> <listitem> <para>Configures DTMF detector mode.
Possible values are AUTO, INBAND or RFC2833.</para> </listitem> </varlistentry> <varlistentry> <term>dtmf.payload</term> <listitem> <para>Configures the payload number if RFC2833 mode is specified.</para> </listitem> </varlistentry> </variablelist> </listitem> </varlistentry> </variablelist> --> <formalpara> <title>Supported Media Types and Formats</title> <para> The supported media types and formats are listed as follows: </para> </formalpara> <variablelist> <varlistentry> <term>WAVE (.wav)</term> <listitem> <para> 16-bit mono/stereo linear </para> </listitem> </varlistentry> </variablelist> <!-- <formalpara> <title>Supported RTP Formats</title> <para>The endpoint is able to receive the follwing RTP media types:</para> </formalpara> <informaltable frame="all"> <tgroup cols="2" align="left" colsep="1" rowsep="1"> <colspec colnum="1" colname="col1"/> <colspec colnum="2" colname="col2"/> <thead> <row> <entry>Media Type</entry> <entry>Payload Number</entry> </row> </thead> <tbody> <row> <entry>Audio: G711 (A-law) 8bit, 8kHz</entry> <entry>8</entry> </row> <row> <entry>Audio: G711 (U-law) 8bit, 8kHz</entry> <entry>0</entry> </row> </tbody> </tgroup> </informaltable> --> <formalpara> <title>Record Directory Configuration</title> <para> You can specify the common directory where all the recorded files should be stored, or simply omit this attribute, in which case the default directory is null, and the application needs to pass an absolute directory path to record to. 
</para> </formalpara> <formalpara> <title>Supported Packages</title> <para> The supported packages are as follows: </para> </formalpara> <itemizedlist> <listitem> <para> <literal>org.mobicents.media.server.spi.events.Announcement</literal> </para> </listitem> <listitem> <para> <literal>org.mobicents.media.server.spi.events.Basic</literal> </para> </listitem> <listitem> <para> <literal>org.mobicents.media.server.spi.events.AU</literal> </para> </listitem> </itemizedlist> </section> <section id="ctms-Packet_Relay_Endpoint"> <title>Packet Relay Endpoint</title> <para> A packet relay endpoint is a specific form of conference bridge that typically only supports two connections. Packet relays can be found in firewalls between a protected and an open network, or in transcoding servers used to provide interoperation between incompatible gateways (for example, gateways which do not support compatible compression algorithms, or gateways which operate over different transmission networks such as IP or ATM). </para> <example id="ctms-The_PREndpointManagement_MBean"> <title>The PREndpointManagement MBean</title> <programlisting linenumbering="unnumbered" role="XML"><mbean code="org.mobicents.media.server.impl.jmx.enp.prl.PRTrunkManagement" name="media.mobicents:endpoint=packet-relay"> <depends>media.mobicents:service=RTPFactory,QID=1</depends> <attribute name="JndiName">media/trunk/PacketRelay</attribute> <attribute name="RtpFactoryName">java:media/mobicents/protocol/RTP</attribute> <attribute name="Channels">10</attribute> </mbean> </programlisting> </example> <formalpara> <title>Configuration of the Packet Relay Endpoint</title> <para> The configurable attributes of the Packet Relay endpoint are as follows: </para> </formalpara> <variablelist> <varlistentry> <term>JndiName</term> <listitem> <para> The JNDI name under which the endpoint is to be bound.
</para> </listitem> </varlistentry> <varlistentry> <term>RtpFactoryName</term> <listitem> <para> The JNDI name of <literal>RTPFactory</literal>. </para> </listitem> </varlistentry> <varlistentry> <term>Channels</term> <listitem> <para> Controls the number of Packet Relay endpoints available in the server instance's endpoint pool. Endpoints are not created dynamically. At any given time, the number of endpoints in use cannot exceed the <userinput>Channels</userinput> value. This value cannot be changed at runtime. </para> </listitem> </varlistentry> </variablelist> <!-- </varlistentry> <varlistentry> <term>Jitter</term> <listitem> <para>The size of jitter buffer in milliseconds.</para> </listitem> </varlistentry> <varlistentry> <term>PacketizationPeriod</term> <listitem> <para>The period of media stream packetization in milliseconds.</para> </listitem> </varlistentry> <varlistentry> <term>PCMA</term> <listitem> <para>Enable or disable G711 (A-law) codec support.</para> </listitem> </varlistentry> <varlistentry> <term>PCMU</term> <listitem> <para>Enable or disable G711 (U-law) codec support.</para> </listitem> </varlistentry> </variablelist> <formalpara> <title>Supported RTP Formats</title> <para>This endpoint is able to receive the following RTP media types:</para> </formalpara> <informaltable frame="all"> <tgroup cols="2" align="left" colsep="1" rowsep="1"> <colspec colnum="1" colname="col1"/> <colspec colnum="2" colname="col2"/> <thead> <row> <entry>Media Type</entry> <entry>Payload Number</entry> </row> </thead> <tbody> <row> <entry>Audio: G711 (A-law) 8bit, 8kHz</entry> <entry>8</entry> </row> <row> <entry>Audio: G711 (U-law) 8bit, 8kHz</entry> <entry>0</entry> </row> </tbody> </tgroup> </informaltable> <formalpara> <title>DTMF Configuration</title> <para>The dual-tone multi-frequency (<acronym>DTMF</acronym>) configuration is determined by the DTMF attribute.
The properties are as follows:</para> </formalpara> <variablelist> <varlistentry> <term>detector.mode</term> <listitem> <para>Configures DTMF detector mode. Possible values are AUTO, INBAND or RFC2833.</para> </listitem> </varlistentry> <varlistentry> <term>dtmf.payload</term> <listitem> <para>Configures the payload number <emphasis>if</emphasis> RFC2833 mode is <emphasis>also</emphasis> specified.</para> </listitem> </varlistentry> </variablelist> --> </section> <section id="ctms-Conference_Bridge_Endpoint"> <title>Conference Bridge Endpoint</title> <para> The Mobicents Media Server should be able to establish several connections between the endpoint and packet networks, or between the endpoint and other endpoints in the same gateway. The signals originating from these connections shall be mixed according to the connection <quote>mode</quote>. The precise number of connections an endpoint supports is a characteristic of the gateway, and may in fact vary according to the allocation of resources within the gateway. The conference endpoint can play an announcement directly on an individual connection, so that only the participant on that connection hears the announcement, and can likewise detect DTMF on a per-connection basis.
</para> <example id="ctms-The_ConfEndpointManagement_MBean"> <title>The ConfEndpointManagement MBean</title> <programlisting linenumbering="unnumbered" role="XML"><mbean code="org.mobicents.media.server.impl.jmx.enp.cnf.ConfTrunkManagement" name="media.mobicents:endpoint=conf"> <depends>media.mobicents:service=RTPFactory,QID=1</depends> <attribute name="JndiName">media/trunk/Conference</attribute> <attribute name="RtpFactoryName"> java:media/mobicents/protocol/RTP </attribute> <attribute name="Channels">10</attribute> </mbean> </programlisting> </example> <formalpara> <title>Configuration of the Conference Bridge Endpoint</title> <para> The configurable attributes of the Conference Bridge endpoint are as follows: </para> </formalpara> <variablelist> <varlistentry> <term>JndiName</term> <listitem> <para> The JNDI name under which the endpoint is to be bound. </para> </listitem> </varlistentry> <varlistentry> <term>RtpFactoryName</term> <listitem> <para> The JNDI name of <literal>RTPFactory</literal>. </para> </listitem> </varlistentry> <varlistentry> <term>Channels</term> <listitem> <para> Controls the number of Conference Bridge endpoints available in the server instance's endpoint pool. Endpoints are not created dynamically. At any given time, the number of endpoints in use cannot exceed the <userinput>Channels</userinput> value. This value cannot be changed at runtime.
</para> </listitem> </varlistentry> </variablelist> <!-- </varlistentry> <varlistentry> <term>Jitter</term> <listitem> <para>The size of jitter buffer in milliseconds.</para> </listitem> </varlistentry> <varlistentry> <term>PacketizationPeriod</term> <listitem> <para>The period of media stream packetization in milliseconds.</para> </listitem> </varlistentry> <varlistentry> <term>PCMA</term> <listitem> <para>Enable or disable G711 (A-law) codec support.</para> </listitem> </varlistentry> <varlistentry> <term>PCMU</term> <listitem> <para>Enable or disable G711 (U-law) codec support.</para> </listitem> </varlistentry> </variablelist> <formalpara> <title>Supported RTP Formats</title> <para>This endpoint is able to receive the follwing RTP media types:</para> </formalpara> <informaltable id="ctms-RTP_Formats_Supported_by_the_Conference_Bridge_Endpoint" frame="all"> <tgroup cols="2" align="left" colsep="1" rowsep="1"> <colspec colnum="1" colname="col1"/> <colspec colnum="2" colname="col2"/> <thead> <row> <entry>Media Type</entry> <entry>Payload Number</entry> </row> </thead> <tbody> <row> <entry>Audio: G711 (A-law) 8bit, 8kHz</entry> <entry>8</entry> </row> <row> <entry>Audio: G711 (U-law) 8bit, 8kHz</entry> <entry>0</entry> </row> </tbody> </tgroup> </informaltable> <formalpara> <title>DTMF Configuration</title> <para>The dual-tone multi-frequency (<acronym>DTMF</acronym>) configuration is determined by DTMF attribute. The properties are as follows:</para> </formalpara> <variablelist> <varlistentry> <term>detector.mode</term> <listitem> <para>Configures DTMF detector mode. Possible values are AUTO, INBAND or RFC2833.</para> </listitem> </varlistentry> <varlistentry> <term> <literal>dtmf.payload</literal> </term> <listitem> <para>Configures DTMF detector mode. 
Possible values are AUTO, INBAND and RFC2833.</para> </listitem> </varlistentry> </variablelist> --> </section> <section id="ctms-MMS_STUN_Support"> <title>MMS STUN Support</title> <para> When using Mobicents Media Server behind a routing device performing Network Address Translation, you may need to employ the Simple Traversal of User Datagram Protocol through Network Address Translators (<acronym>STUN</acronym>) protocol in order for the server to operate correctly. In general, it is recommended to avoid deploying the MMS behind a NAT, since doing so can incur significant performance penalties and failures. Nevertheless, the current MMS implementation does work with a static NAT, also known as a one-to-one (1-1) NAT, in which no port-mapping occurs. Full Cone and Address-Restricted Cone NATs should also work. </para> <para> For more information about STUN NAT classification, refer to section 5 of <ulink url="http://www.faqs.org/rfcs/rfc3489.html">RFC3489 - STUN - Simple Traversal of User Datagram Protocol (UDP)</ulink>. </para> <formalpara> <title>MMS STUN Configuration</title> <para> Each RTPFactory in the Media Server can have its own STUN preferences. The STUN options are specified in the <filename>jboss-service.xml</filename> configuration file.
Here is an example of an RTPFactory MBean with static NAT configuration: </para> </formalpara> <example id="ctms-Static_NAT_configuration_of_an_Announcement_Endpoint_in_jboss-service.xml"> <title>Static NAT configuration of an Announcement Endpoint in jboss-service.xml</title> <programlisting linenumbering="unnumbered" role="XML"> <mbean code="org.mobicents.media.server.impl.jmx.rtp.RTPFactory" name="media.mobicents:service=RTPFactory,QID=1"> <attribute name="JndiName">java:media/mobicents/protocol/RTP</attribute> <attribute name="BindAddress">${jboss.bind.address}</attribute> <attribute name="Jitter">60</attribute> <attribute name="PacketizationPeriod">20</attribute> <attribute name="PortRange">1024-65535</attribute> <attribute name="AudioFormats"> 8 = ALAW, 8000, 8, 1; 0 = ULAW, 8000, 8, 1; 101 = telephone-event </attribute> <attribute name="UseStun">true</attribute> <attribute name="StunServerAddress">stun.ekiga.net</attribute> <attribute name="StunServerPort">3478</attribute> <attribute name="UsePortMapping">false</attribute> </mbean> </programlisting> </example> <para> There are four attributes related to STUN configuration: </para> <variablelist> <varlistentry> <term>UseStun</term> <listitem> <para> A boolean attribute which enables or disables STUN for the current endpoint. </para> </listitem> </varlistentry> <varlistentry> <term>StunServerAddress</term> <listitem> <para> A string attribute; the address of a STUN server. In the <filename>jboss-service.xml</filename> configuration file example, this attribute is set to <literal>stun.ekiga.net</literal>. </para> </listitem> </varlistentry> <varlistentry> <term>StunServerPort</term> <listitem> <para> A string attribute representing the port number of the STUN server. In the <filename>jboss-service.xml</filename> configuration file example, 3478 is the port of the Ekiga server.
</para> </listitem> </varlistentry> <varlistentry> <term>UsePortMapping</term> <listitem> <para> A boolean attribute that specifies whether the NAT is mapping the port numbers. A NAT is mapping ports if the internal and external ports are <emphasis>not</emphasis> guaranteed to be the same for every connection through the NAT. In other words, if a client establishes a connection with the NAT at the hypothetical address 111.111.111.111, on port 1024, then the NAT establishes the second leg of the connection to a different (private) address, but on the same port, such as 192.168.1.1:1024. If these ports are the same (1024), then your NAT is not mapping the ports, and you can set this attribute to false, which improves the performance of NAT traversal by performing the STUN lookup only once at boot time, instead of every time a new connection is established. NATs that do not map ports are also known as static NATs. </para> </listitem> </varlistentry> </variablelist> </section> </chapter> <chapter id="captms-Controlling_and_Programming_the_Media_Server" lang="en-US"> <!-- chapter id nickname: captms --><title>Controlling and Programming the Mobicents Media Server</title> <section id="captms-MMS_Control_Protocols"> <title>MMS Control Protocols</title> <para> The Mobicents Media Server adopts a call control architecture where the call control <quote>intelligence</quote> is located outside of the Media Server itself, and is handled by external call control elements collectively known as the Call State Control Function (CSCF). The Media Server assumes that these call control elements will synchronize with each other to send coherent commands and responses to the media servers under their control. The server control protocol is, in essence, an asynchronous master/slave protocol, where the Server Control Modules are expected to execute commands sent by the CSCF.
Each Server Control Module is implemented as a JSLEE application, and consists of a set of Service Building Blocks (<acronym>SBB</acronym>s), which are in charge of communicating with media server endpoints via the SPI. Such an architecture avoids difficulties with programming concurrency, low-level transaction and state-management details, connection-pooling and other complex APIs. </para> <section id="captms-Media_Gateway_Control_Protocol_Interface"> <title>Media Gateway Control Protocol Interface</title> <para> The Media Gateway Control Protocol (MGCP) is a protocol for controlling media gateways (for example, the Media Server) from external call control elements such as media gateway controllers or Call Agents. The MGCP assumes that the Call Agents will synchronize with each other to send coherent commands and responses to the gateways under their control. </para> <para> The MGCP module is included in the binary distribution. The Call Agent uses the MGCP to tell the Media Server: </para> <itemizedlist> <listitem> <para> which events should be reported to the Call Agent; </para> </listitem> <listitem> <para> how endpoints should be connected; and, </para> </listitem> <listitem> <para> which signals should be played on which endpoints. </para> </listitem> </itemizedlist> <para> MGCP is, in essence, a master/slave protocol, where the gateways are expected to execute commands sent by the Call Agents. The general base architecture and programming interface are described in <ulink url="http://www.ietf.org/rfc/rfc2805.txt">RFC 2805</ulink>, and the current specific MGCP definition is located in <ulink url="http://www.ietf.org/rfc/rfc3435.txt">RFC 3435</ulink>. </para> <section> <title>Controller Configuration</title> <para> The MGCP interface is provided using a series of Java beans. The configuration file <filename>mgcp-conf.xml</filename> contains the configuration information.
The <filename>MgcpController</filename> bean is responsible for assembling and booting the MGCP interface. The <filename>mgcp-conf.xml</filename> configuration file is structured using the following elements. </para> <example> <title>The mgcp-conf.xml Configuration File</title> <programlisting linenumbering="unnumbered" role="XML"> <bean name="MgcpController" class="org.mobicents.media.server.ctrl.mgcp.MgcpController"> <property name="namingService"> <inject bean="MediaServer" /> </property> <property name="defaultNotifiedEntity">client@localhost</property> <property name="bindAddress">127.0.0.1</property> <property name="port">2427</property> <incallback method="addPackage" /> <uncallback method="removePackage" /> </bean> </programlisting> </example> <para> The configurable properties of the <filename>MgcpController</filename> bean include: </para> <variablelist> <varlistentry> <term><literal>defaultNotifiedEntity</literal></term> <listitem> <para> Specifies the notified entity that is provisioned on start-up. </para> </listitem> </varlistentry> <varlistentry> <term><literal>bindAddress</literal></term> <listitem> <para> Specifies the IP address on which the MGCP controller is bound. </para> </listitem> </varlistentry> <varlistentry> <term><literal>port</literal></term> <listitem> <para> Specifies the port number on which the MGCP controller is bound. The default MGCP gateway port is 2427. </para> </listitem> </varlistentry> </variablelist> </section> <section> <title>Default Packages and Signals</title> <para> Endpoints and resources can be configured on demand; therefore, the actual capabilities of a given endpoint may vary. To counteract this, the MGCP controller provides a method for mapping MGCP events or signals to a required resource. The following table lists the supported packages in the default Media Server configuration for each endpoint.
</para> <table frame="all" id="cpms-Supported_Default_Endpoint_Packages"> <title>Supported Default Endpoint Packages</title> <tgroup align="left" cols="3" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <thead> <row> <entry> Endpoint </entry> <entry> Local Name </entry> <entry> Packages </entry> </row> </thead> <tbody> <row> <entry> Announcement Access Point </entry> <entry> <filename>/media/aap/[1..10]</filename> </entry> <entry> Announcement (A) </entry> </row> <row> <entry> Interactive Voice Response </entry> <entry> <filename>/media/IVR/[1..10]</filename> </entry> <entry> Announcement (A), Advanced Audio (AU) </entry> </row> <row> <entry> Conference Bridge </entry> <entry> <filename>/media/cnf/[1..10]</filename> </entry> <entry> N/A </entry> </row> <row> <entry> Packet Relay </entry> <entry> <filename>/media/prelay/[1..10]</filename> </entry> <entry> N/A </entry> </row> <row> <entry> Echo </entry> <entry> <filename>/media/echo/[1..10]</filename> </entry> <entry> N/A </entry> </row> </tbody> </tgroup> </table> </section> </section> </section> <section id="captms-MMS_Control_API"> <title>MMS Control API</title> <para> The main objective of the Media Server Control API is to provide multimedia application developers with a Media Server abstraction interface. </para> <note id="captms-The_JavaDoc_for_the_MMS_Control_API"> <title>The JavaDoc for the MMS Control API</title> <para> The JavaDoc documentation for the Mobicents Media Server Control API is available here: <ulink url="http://hudson.jboss.org/hudson/job/MobicentsDocumentation/lastSuccessfulBuild/artifact/msc-api/apidocs/index.html"/>. </para> </note> <section id="captms-Basic_Components_of_the_MMS_API"> <title>Basic Components of the MMS API</title> <para> This section describes the basic objects of the API as well as some common design patterns.
</para> <para> The API components consist of a related set of interfaces, classes, operations, events, capabilities, and exceptions. The API provides seven key objects common to media servers, as well as more advanced packages. This overview provides a very brief description of the API; the seven key objects are: </para> <variablelist> <varlistentry> <term><literal>MsProvider</literal></term> <listitem> <para> Represents the <quote>window</quote> through which an application views the call processing. </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsSession</literal></term> <listitem> <para> Represents a call; this object is a dynamic <emphasis>collection of physical and logical entities</emphasis> that bring two or more endpoints together. </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsEndpoint</literal></term> <listitem> <para> Represents a logical endpoint (e.g., an announcement access server, or an interactive voice response server). </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsConnection</literal></term> <listitem> <para> Represents the dynamic relationship between an <literal>MsSession</literal> object and a user agent. </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsLink</literal></term> <listitem> <para> Represents the dynamic relationship between two endpoints located on the same Media Server. </para> </listitem> </varlistentry> <!-- <varlistentry> <term> <literal>MsSignalGenerator</literal> </term> <listitem> <warning> <title>Deprecated</title> <para>You should use <literal>MsRequestedEvent</literal> instead.</para> </warning> <para>Represents the media resource which is responsible for generating media. 
</para> </listitem> </varlistentry> --><!-- <varlistentry> <term> <literal>MsSignalDetector</literal> </term> <listitem> <warning> <title>Deprecated</title> <para>You should use <literal>MsRequestedEvent</literal> instead.</para> </warning> <para>Represents the media resource which is responsible for generating media.</para> </listitem> </varlistentry> --> <varlistentry> <term><literal>MsRequestedEvent</literal></term> <listitem> <para> Used by the application to request the detection of certain events, such as DTMF, on an endpoint. </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsRequestedSignal</literal></term> <listitem> <para> Used by the application to request that signals, such as Play Announcement, be applied to an endpoint. </para> </listitem> </varlistentry> </variablelist> <para> The purpose of an <literal>MsConnection</literal> object is to describe the relationship between an <literal>MsSession</literal> object and a user agent. An <literal>MsConnection</literal> object exists if the user agent is part of the media session. <literal>MsConnection</literal> objects are immutable in terms of their <literal>MsSession</literal> and user agent references. In other words, the <literal>MsSession</literal> and user agent object references do not change throughout the lifetime of the <literal>MsConnection</literal> object instance. The same <literal>MsConnection</literal> object may not be used in another <literal>MsSession</literal>. </para> <mediaobject id="captms-mms-MMSControlAPI-dia-MSControlAPI"> <imageobject> <imagedata align="center" fileref="images/mms-MMSControlAPI-dia-MSControlAPI.jpg" format="JPG" width="400"/> </imageobject> <caption> <para> Interface Diagram of the MMS API </para> </caption> </mediaobject> <para> <literal>MsProvider</literal> can be used to create the <literal>MsSession</literal> object and to create an instance of <literal>MsEventFactory</literal>. 
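To make the relationships among these objects concrete, the following is a minimal, self-contained sketch of the provider/session life cycle. All class and method names here are illustrative stand-ins modeled on the API objects listed above; they are not the real org.mobicents classes:

```java
// Illustrative model only: stand-ins for MsProvider/MsSession/MsConnection.
import java.util.ArrayList;
import java.util.List;

public class SessionModel {

    enum SessionState { IDLE, ACTIVE, INVALID }

    static class Session {
        private SessionState state = SessionState.IDLE;
        private final List<String> connections = new ArrayList<>();

        SessionState getState() { return state; }

        // Mirrors MsSession.createNetworkConnection(endpointName):
        // the first connection moves the session from IDLE to ACTIVE.
        void createConnection(String endpointName) {
            connections.add(endpointName);
            state = SessionState.ACTIVE;
        }

        // When the last connection is released, the session becomes INVALID.
        void releaseConnection(String endpointName) {
            connections.remove(endpointName);
            if (connections.isEmpty()) {
                state = SessionState.INVALID;
            }
        }
    }

    // Mirrors MsProvider.createSession(): a new session starts out IDLE.
    static Session createSession() { return new Session(); }

    public static void main(String[] args) {
        Session s = createSession();
        System.out.println(s.getState()); // IDLE
        s.createConnection("media/trunk/Announcement/$");
        System.out.println(s.getState()); // ACTIVE
        s.releaseConnection("media/trunk/Announcement/$");
        System.out.println(s.getState()); // INVALID
    }
}
```

The real API adds listeners and an event factory on top of this skeleton, as described in the following sections.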
</para> <para> <literal>MsSession</literal> is a transient association of zero or more connections for the purposes of engaging in a real-time communication exchange. The session and its associated connection objects describe the control and media flows taking place in a communication network. Applications create instances of an <literal>MsSession</literal> object with the <function>MsProvider.createSession()</function> method, which returns an <literal>MsSession</literal> object that has zero connections and is in the <literal>IDLE</literal> state. The <literal>MsProvider</literal> object instance does not change throughout the lifetime of the <literal>MsSession</literal> object. The <literal>MsProvider</literal> object associated with an <literal>MsSession</literal> object is obtained via the <function>getProvider()</function> method. </para> <para> Applications create instances of <literal>MsConnection</literal> objects with the <function>MsSession.createNetworkConnection(String endpointName)</function> method. At this stage, <literal>MsConnection</literal> is in the <literal>IDLE</literal> state. The application calls <function>MsConnection.modify(String localDesc, String remoteDesc)</function>, passing the local and remote SDP. At this point, <literal>MsConnection</literal> looks up the corresponding endpoint via JNDI, using the <literal>endPointName</literal> passed to it. It will then call <function>createConnection(int mode)</function> to create an instance of <literal>Connection</literal>. This <literal>Connection</literal> creates an instance of <literal>RtpSocketAdaptorImpl</literal>, which opens the socket for RTP data transfer. However, the transfer of data does not yet begin, and the state of <literal>MsConnection</literal> is <literal>HALF_OPEN</literal>. At this stage, the <literal>Connection</literal> can only accept RTP packets, as it has no knowledge of a peer to which to send RTP packets. 
If <methodname>remoteDesc</methodname> is not null, it is applied to the <literal>Connection</literal> at this stage, and the state of <literal>MsConnection</literal> becomes <literal>OPEN</literal>: it now knows the peer SDP, and can receive as well as send RTP packets. Once <methodname>MsConnection.release()</methodname> is called, all of the resources of <literal>MsConnection</literal> are released and it transitions to the <literal>CLOSED</literal> state. <literal>MsConnection</literal> is unusable in the <literal>CLOSED</literal> state and gets garbage-collected. </para> <para> Applications create instances of <literal>MsLink</literal> objects with the <function>MsSession.createLink(MsLinkMode mode)</function> method. At this stage, <literal>MsLink</literal> is in the <literal>IDLE</literal> state. The application calls <function>MsLink.join(String endpointName1, String endpointName2)</function>, passing the endpoint names of the two local endpoints to be joined. At this point, the <literal>MsLink</literal> object looks up the corresponding <literal>EndPoint</literal>s via JNDI, using the endpoint names passed to it. It will then call <function>createConnection(int mode)</function> to create an instance of the <literal>Connection</literal> object. The connections are local connections, so no network resources (sockets) are acquired. As soon as <literal>Connection</literal>s are created for both <literal>EndPoint</literal>s, <function>setOtherParty(Connection other)</function> is called on each <literal>Connection</literal>, passing the other <literal>Connection</literal>, which starts the data transfer between the two <literal>Connection</literal>s. At this stage, <literal>MsLink</literal> changes to the <literal>CONNECTED</literal> state. As soon as the application calls <methodname>MsLink.release()</methodname>, <methodname>release()</methodname> is called on the connections of the respective endpoints. 
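The link life cycle just described can be condensed into a small state machine. The sketch below is a self-contained illustration with made-up names, not the real MsLink implementation; the terminal DISCONNECTED state follows once both endpoint connections are released:

```java
// Illustrative model of the MsLink life cycle described in the text;
// class and method names are stand-ins, not the real org.mobicents API.
public class LinkModel {

    enum State { IDLE, CONNECTED, DISCONNECTED, FAILED }

    private State state = State.IDLE;
    private String endpointA;
    private String endpointB;

    State getState() { return state; }

    // Mirrors MsLink.join(endpointName1, endpointName2): both local endpoints
    // create a connection, the connections are wired together, and data flows.
    void join(String endpointName1, String endpointName2) {
        this.endpointA = endpointName1;
        this.endpointB = endpointName2;
        state = State.CONNECTED;
    }

    // Mirrors MsLink.release(): both endpoint connections are released and
    // the link becomes unusable.
    void release() {
        state = State.DISCONNECTED;
    }

    public static void main(String[] args) {
        LinkModel link = new LinkModel();
        System.out.println(link.getState()); // IDLE
        link.join("/media/IVR/$", "/media/cnf/$");
        System.out.println(link.getState()); // CONNECTED
        link.release();
        System.out.println(link.getState()); // DISCONNECTED
    }
}
```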
As soon as both of the connections are released, <literal>MsLink</literal> changes to <literal>DISCONNECTED</literal> and becomes unusable. Soon after this, <literal>MsLink</literal> gets garbage-collected. </para> <para> The application may ask to be notified about certain events occurring in an endpoint (e.g., DTMF), or the application may request certain signals to be applied to an endpoint (e.g., Play an Announcement). To achieve this, the application needs to get an instance of <literal>MsEventFactory</literal> by calling <methodname>MsProvider.getEventFactory()</methodname>, then create an instance of <literal>MsRequestedEvent</literal> to request notification of events, or an instance of <literal>MsRequestedSignal</literal> to apply signals at endpoints. The application needs to pass the corresponding <literal>MsEventIdentifier</literal> as a parameter to <methodname>MsEventFactory.createRequestedEvent(MsEventIdentifier eventID)</methodname> or <methodname>MsEventFactory.createRequestedSignal(MsEventIdentifier eventID)</methodname>. The examples below clarify this usage. </para> </section> <section id="captms-Basic_API_Patterns_Listeners"> <title>Basic API Patterns: Listeners</title> <para> The basic programming pattern of the API is that applications (which reside <quote>above</quote> the API) make synchronous calls to API methods. The platform or network element implementing the API can inform the application of underlying events (for example, the arrival of incoming calls) by means of Java events. The application provides <literal>Listener</literal> objects corresponding to the events it is interested in obtaining. </para> <variablelist> <title><literal>Listeners</literal></title> <varlistentry> <term><literal>MsSessionListener</literal></term> <listitem> <para> Applications which are interested in receiving events for changes in state of the <literal>MsSession</literal> object should implement <literal>MsSessionListener</literal>. 
</para> </listitem> </varlistentry> <varlistentry> <term><literal>MsConnectionListener</literal></term> <listitem> <para> Applications which are interested in receiving events for changes of state in <literal>MsConnection</literal> should implement <literal>MsConnectionListener</literal>. </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsLinkListener</literal></term> <listitem> <para> Applications which are interested in receiving events for changes in state of <literal>MsLink</literal> should implement <literal>MsLinkListener</literal>. </para> </listitem> </varlistentry> <varlistentry> <term><literal>MsResourceListener</literal></term> <listitem> <para> Applications interested in receiving events for changes in state of <literal>MsSignalDetector</literal> or <literal>MsSignalGenerator</literal> should implement <literal>MsResourceListener</literal>. </para> </listitem> </varlistentry> </variablelist> </section> <section id="captms-Events"> <title>Events</title> <para> Each of the listeners defined above listens for a different type of event fired by the server. </para> <formalpara> <title>Events Related to <literal>MsSession</literal></title> <para> <literal>MsSessionListener</literal> listens for <literal>MsSessionEvent</literal>, which carries the <literal>MsSessionEventID</literal> representing an <literal>MsSession</literal> state change. The following table shows the different types of <literal>MsSessionEventID</literal>, when these events are fired, and the corresponding methods of <literal>MsSessionListener</literal> that will be called. 
</para> </formalpara> <informaltable frame="all" id="captms-Events_Related_to_MsSession"> <tgroup align="left" cols="3" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <thead> <row> <entry> <literal>MsSessionEventID</literal> </entry> <entry> Description </entry> <entry> <literal>MsSessionListener</literal> Method Called </entry> </row> </thead> <tbody> <row> <entry> <literal>SESSION_CREATED</literal> </entry> <entry> Fired when <function>MsProvider.createSession()</function> is called and a new <literal>MsSession</literal> is created. </entry> <entry> <function>public void sessionCreated(MsSessionEvent evt)</function> </entry> </row> <row> <entry> <literal>SESSION_ACTIVE</literal> </entry> <entry> When the first <literal>MsConnection</literal> or <literal>MsLink</literal> is created on the <literal>MsSession</literal>, it transitions to the <literal>ACTIVE</literal> state and <literal>SESSION_ACTIVE</literal> is fired. Afterwards, the state remains <literal>ACTIVE</literal> even if the application creates more <literal>MsConnections</literal> or <literal>MsLinks</literal>. </entry> <entry> <function>public void sessionActive(MsSessionEvent evt)</function> </entry> </row> <row> <entry> <literal>SESSION_INVALID</literal> </entry> <entry> When all of the <literal>MsConnection</literal> or <literal>MsLink</literal> objects are disassociated from the <literal>MsSession</literal>, it transitions to the <literal>INVALID</literal> state and <literal>SESSION_INVALID</literal> is fired. 
</entry> <entry> <function>public void sessionInvalid(MsSessionEvent evt)</function> </entry> </row> </tbody> </tgroup> </informaltable> <formalpara> <title>Events Related to <literal>MsConnection</literal></title> <para> <literal>MsConnectionListener</literal> listens for an <literal>MsConnectionEvent</literal>, which carries the <literal>MsConnectionEventID</literal> that represents an <literal>MsConnection</literal> state change. The following table shows the different types of <literal>MsConnectionEventID</literal>, when these events are fired, and the corresponding methods of <literal>MsConnectionListener</literal> that will be called. </para> </formalpara> <informaltable frame="all" id="captms-Events_Related_to_MsConnection"> <tgroup align="left" cols="3" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <thead> <row> <entry> <literal>MsConnectionEventID</literal> </entry> <entry> Description </entry> <entry> <literal>MsConnectionListener</literal> Method Called </entry> </row> </thead> <tbody> <row> <entry> <literal>CONNECTION_CREATED</literal> </entry> <entry> Fired as soon as the creation of <literal>MsConnection</literal> is successful. <literal>MsConnection</literal> is not holding any resources yet. </entry> <entry> <function>public void connectionCreated(MsConnectionEvent event)</function> </entry> </row> <row> <entry> <literal>CONNECTION_HALF_OPEN</literal> </entry> <entry> Fired as soon as the modification of <literal>MsConnection</literal> is successful. At this stage, the RTP socket is open in the Media Server to receive a stream, but the remote SDP is not yet known. 
The application may call <methodname>MsConnection.modify(localDesc, null)</methodname>, passing <constant>null</constant> for the remote SDP if it is not known yet, and then later call <function>modify()</function> again with the actual SDP once it is known. </entry> <entry> <function>public void connectionHalfOpen(MsConnectionEvent event);</function> </entry> </row> <row> <entry> <literal>CONNECTION_MODIFIED</literal> </entry> <entry> As soon as <literal>MsConnection</literal> is successfully modified by calling <function>MsConnection.modify(String localDesc, String remoteDesc)</function>, <literal>CONNECTION_MODIFIED</literal> is fired. When <function>modify()</function> is called, <literal>MsConnection</literal> checks whether an endpoint is already associated with it; if so, the call is a modification request. </entry> <entry> </entry> </row> <row> <entry> <literal>CONNECTION_OPEN</literal> </entry> <entry> Fired as soon as the modification of <literal>MsConnection</literal> is successful and the SDP passed by the Call Agent is successfully applied to an RTP connection. At this stage, RTP packets flow from the user agent to the Media Server and vice versa. The application reaches this state by calling <methodname>MsConnection.modify(localDesc, remoteDesc)</methodname>, passing the <methodname>remoteDesc</methodname> (remote SDP). </entry> <entry> <methodname>public void connectionOpen(MsConnectionEvent event);</methodname> </entry> </row> <row> <entry> <literal>CONNECTION_DISCONNECTED</literal> </entry> <entry> As soon as <literal>MsConnection</literal> is successfully released by calling <function>MsConnection.release()</function>, <literal>CONNECTION_DISCONNECTED</literal> is fired. 
</entry> <entry> <function>public void connectionDisconnected(MsConnectionEvent event);</function> </entry> </row> <row> <entry> <literal>CONNECTION_FAILED</literal> </entry> <entry> Fired as soon as the creation of <literal>MsConnection</literal> fails for reasons specified in <literal>MsConnectionEventCause</literal>. Immediately after <literal>CONNECTION_FAILED</literal>, <literal>CONNECTION_DISCONNECTED</literal> will be fired, giving the listener a chance to perform cleanup. </entry> <entry> <methodname>public void connectionFailed(MsConnectionEvent event);</methodname> </entry> </row> </tbody> </tgroup> </informaltable> <formalpara> <title>Events Related to <literal>MsLink</literal></title> <para> <literal>MsLinkListener</literal> listens for an <literal>MsLinkEvent</literal>, which carries the <literal>MsLinkEventID</literal> that represents an <literal>MsLink</literal> state change. The following table shows the different types of <literal>MsLinkEventID</literal>, when these events are fired, and the corresponding methods of <literal>MsLinkListener</literal> that are called. </para> </formalpara> <informaltable frame="all" id="captms-Events_Related_to_MsLink"> <tgroup align="left" cols="3" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <thead> <row> <entry> <literal>MsLinkEventID</literal> </entry> <entry> Description </entry> <entry> <literal>MsLinkListener</literal> Method Called </entry> </row> </thead> <tbody> <row> <entry> <literal>LINK_CREATED</literal> </entry> <entry> As soon as a new <literal>MsLink</literal> is created by calling <function>MsSession.createLink(MsLinkMode mode)</function>, <literal>LINK_CREATED</literal> is fired. 
</entry> <entry> <function>public void linkCreated(MsLinkEvent evt)</function> </entry> </row> <row> <entry> <literal>LINK_CONNECTED</literal> </entry> <entry> Fired as soon as the <function>join(String a, String b)</function> operation of <literal>MsLink</literal> is successful. </entry> <entry> <function>public void linkConnected(MsLinkEvent evt);</function> </entry> </row> <row> <entry> <literal>LINK_DISCONNECTED</literal> </entry> <entry> Fired as soon as the <function>release()</function> operation of <literal>MsLink</literal> is successful. </entry> <entry> <function>public void linkDisconnected(MsLinkEvent evt);</function> </entry> </row> <row> <entry> <literal>LINK_FAILED</literal> </entry> <entry> Fired as soon as the <function>join(String a, String b)</function> operation of <literal>MsLink</literal> fails. </entry> <entry> <function>public void linkFailed(MsLinkEvent evt)</function> </entry> </row> </tbody> </tgroup> </informaltable> </section> <section id="captms-MSC_API_Objects_Finite_State_Machines"> <title>MSC API Objects: Finite State Machines</title> <formalpara> <title><literal>MsSessionState</literal> Finite State Machine</title> <para> The behavior of <literal>MsSession</literal> is specified in terms of Finite State Machines (<acronym>FSM</acronym>s) represented by <literal>MsSessionState</literal>, shown below: </para> </formalpara> <variablelist> <varlistentry> <term><literal>IDLE</literal></term> <listitem> <para> This state indicates that the session has zero connections or links. </para> </listitem> </varlistentry> <varlistentry> <term><literal>ACTIVE</literal></term> <listitem> <para> This state indicates that the session has one or more connections or links. </para> </listitem> </varlistentry> <varlistentry> <term><literal>INVALID</literal></term> <listitem> <para> This state indicates that the session has lost all of its connections or links. 
</para> </listitem> </varlistentry> </variablelist> <mediaobject id="captms-mms-MMSControlAPI-dia-Session"> <imageobject> <imagedata align="center" fileref="images/mms-MMSControlAPI-dia-Session.png" format="PNG" width="419"/> </imageobject> </mediaobject> <formalpara> <title><literal>MsConnection</literal> Finite State Machine</title> <para> <literal>MsConnection</literal> state is represented by the <literal>MsConnectionState</literal> enum: </para> </formalpara> <variablelist> <varlistentry> <term><literal>IDLE</literal></term> <listitem> <para> This state indicates that the <literal>MsConnection</literal> has only been created and has no resources attached to it. </para> </listitem> </varlistentry> <varlistentry> <term><literal>HALF_OPEN</literal></term> <listitem> <para> This state indicates that the <literal>MsConnection</literal> has created the RTP socket, but does not yet have the remote SDP needed to send RTP packets. <literal>MsConnection</literal> is still usable in the <literal>HALF_OPEN</literal> state if it only needs to receive RTP packets and does not have to send any. </para> </listitem> </varlistentry> <varlistentry> <term><literal>OPEN</literal></term> <listitem> <para> This state indicates that the <literal>MsConnection</literal> now has the remote SDP and can send RTP packets to the remote IP (for example, to a remote user agent). </para> </listitem> </varlistentry> <varlistentry> <term><literal>FAILED</literal></term> <listitem> <para> This state indicates that the creation or modification of <literal>MsConnection</literal> failed, and that the <literal>MsConnection</literal> object is no longer reusable. </para> </listitem> </varlistentry> <varlistentry> <term><literal>CLOSED</literal></term> <listitem> <para> This state indicates that <literal>MsConnection</literal> has released all its resources and closed the RTP sockets. It is not usable anymore. 
</para> </listitem> </varlistentry> </variablelist> <formalpara> <title><literal>MsLink</literal> Finite State Machine</title> <para> <literal>MsLink</literal> state is represented by the <literal>MsLinkState</literal> enum: </para> </formalpara> <variablelist> <varlistentry> <term><literal>IDLE</literal></term> <listitem> <para> This state indicates that the <literal>MsLink</literal> has been created and has no endpoints associated with it. </para> </listitem> </varlistentry> <varlistentry> <term><literal>CONNECTED</literal></term> <listitem> <para> This state indicates that the connections from both endpoints have been created and that data transfer has started. </para> </listitem> </varlistentry> <varlistentry> <term><literal>FAILED</literal></term> <listitem> <para> This state indicates that the creation of the <literal>MsLink</literal> failed and that the link is no longer usable. </para> </listitem> </varlistentry> <varlistentry> <term><literal>DISCONNECTED</literal></term> <listitem> <para> This state indicates that the <literal>MsLink</literal> has closed the connections of both endpoints and is no longer usable. </para> </listitem> </varlistentry> </variablelist> </section> <section id="captms-API_Methods_and_Usage"> <title>API Methods and Usage</title> <para> So far, we have specified the key objects as well as their Finite State Machines (<acronym>FSM</acronym>s). To understand operationally how these objects are used and the methods they provide, we can look at the UML sequence diagram examples. The following call flow depicts a simple announcement. </para> <para> Click to see the <ulink url="http://mobicents-public.googlegroups.com/web/sas-MMSControlAPI-dia-IVRMsConnectionAPI.png?gda=hEmmFl4AAAAF_VX0TG5xx-FBSRUj3rSwgeNEfkM5quPf0dNuRU50JeoDajkVeSsnUQ5nTudipElRBm39yBjFjuPyiOBf15ilwxyWU4Owty8oB440nFYg8OOwpdWz5ftt1dlzlu5J-bE">Announcement call flow diagram</ulink>. 
</para> <!-- <mediaobject id="captms-mms-MMSControlAPI-dia-IVRMSConnectionAPI"> <imageobject> <imagedata align="center" width="700" fileref="images/mms-MMSControlAPI-dia-IVRMSConnectionAPI.png" format="PNG" /> </imageobject> </mediaobject> --><!-- <remark>TBD: Replace this orderedlist with a callout list once the graphic is remade.</remark> --><!-- <orderedlist> <listitem> <para>The application receives an underlying signaling message.</para> </listitem> <listitem> <para>The application registers listeners.</para> </listitem> <listitem> <para>The application registers listeners.</para> </listitem> <listitem> <para>The application registers listeners.</para> </listitem> <listitem> <para>The application creates an <literal>MsSession</literal> object.</para> </listitem> <listitem> <para>The application creates an <literal>MsConnection</literal> object using the <literal>MsSession</literal> object.</para> </listitem> <listitem> <para>The application modifies <literal>MsConnection</literal>, passing the SDP descriptor received on the signaling channel.</para> </listitem> <listitem> <para> The <literal>MsConnection</literal> implementation sends a request to the media server using one of the control protocols.</para> </listitem> <listitem> <para>The server responds that the media server connection has been created.</para> </listitem> <listitem> <para>The application receives <literal>ConnectionEvent.CONNECTION_CREATED</literal>.</para> </listitem> <listitem> <para>The application obtains the server's SDP and sends a response to the user agent. 
Media conversation has started.</para> </listitem> <listitem> <para>The application creates a <literal>SignalGenerator</literal> object and asks it to play an announcement.</para> </listitem> <listitem> <para>The application creates a <literal>SignalGenerator</literal> object and asks it to play an announcement.</para> </listitem> <listitem> <para>The application creates a <literal>SignalGenerator</literal> and asks it to play the announcement.</para> </listitem> <listitem> <para>The server reports that the announcement is complete.</para> </listitem> <listitem> <para>The server reports that the announcement is complete.</para> </listitem> </orderedlist> --> <example id="captms-MSC_API_Example_Code"> <title>MSC API Example Code</title> <programlisting linenumbering="unnumbered" role="JAVA">
/**
 * Pseudocode that shows how to use the MSC API. This example uses the
 * Announcement Endpoint to play an announcement.
 *
 * user agent <----> RTP Connection <--- Announcement Endpoint
 *
 * @author amit bhayani
 */
public class AnnouncementExample implements MsSessionListener, MsConnectionListener {

    private MsProvider msProvider;
    private MsSession msSession;

    public void startMedia(String remoteDesc) {
        // Create the provider and keep it in a field; connectionCreated()
        // uses it later to obtain the event factory
        msProvider = new MsProviderImpl();

        // Register the listeners
        msProvider.addSessionListener(this);
        msProvider.addConnectionListener(this);

        // Create the session
        msSession = msProvider.createSession();

        // Create the connection, passing the endpoint name. Here we are
        // creating an Announcement Endpoint which will be connected to the
        // user agent (remoteDesc is the SDP of the remote end)
        MsConnection msConnection = msSession.createNetworkConnection("media/trunk/Announcement/$");

        // Get the remote SDP here and pass it to the connection. If creation
        // of the connection is successful, connectionCreated() will be called
        msConnection.modify("$", remoteDesc);
    }

    public void sessionActive(MsSessionEvent evt) {
    }

    public void sessionCreated(MsSessionEvent evt) {
    }

    public void sessionInvalid(MsSessionEvent evt) {
    }

    public void connectionCreated(MsConnectionEvent event) {
        MsConnection connection = event.getConnection();
        MsEndpoint endpoint = connection.getEndpoint();

        // This is the actual name; it could be something like
        // 'media/trunk/Announcement/enp-1'
        String endpointName = endpoint.getLocalName();

        // URL of the audio file to play
        String url = "http://something/mobicents.wav";

        MsEventFactory eventFactory = msProvider.getEventFactory();
        MsPlayRequestedSignal play =
                (MsPlayRequestedSignal) eventFactory.createRequestedSignal(MsAnnouncement.PLAY);
        play.setURL(url);

        // Request notification of the Announcement COMPLETED event, or the
        // FAILED event in case the announcement cannot be played
        MsRequestedEvent onCompleted = eventFactory.createRequestedEvent(MsAnnouncement.COMPLETED);
        onCompleted.setEventAction(MsEventAction.NOTIFY);

        MsRequestedEvent onFailed = eventFactory.createRequestedEvent(MsAnnouncement.FAILED);
        onFailed.setEventAction(MsEventAction.NOTIFY);

        MsRequestedSignal[] requestedSignals = new MsRequestedSignal[] { play };
        MsRequestedEvent[] requestedEvents = new MsRequestedEvent[] { onCompleted, onFailed };

        endpoint.execute(requestedSignals, requestedEvents, connection);
    }

    public void connectionDisconnected(MsConnectionEvent event) {
    }

    public void connectionFailed(MsConnectionEvent event) {
    }

    public void connectionHalfOpen(MsConnectionEvent event) {
    }

    public void connectionOpen(MsConnectionEvent event) {
    }
}
</programlisting> </example> <example 
id="captms-DTMF_Listener_Example_Code"> <title>DTMF Listener Example Code</title> <programlisting linenumbering="unnumbered" role="JAVA">
// Example that shows how to listen for DTMF. For simplicity, all imports
// and other code have been removed
public class IVRExample implements MsSessionListener, MsConnectionListener, MsNotificationListener {

    public void startMedia(String remoteDesc) {
        // Create the provider and keep it in a field; connectionCreated()
        // uses it later to obtain the event factory
        msProvider = new MsProviderImpl();

        // Register the listeners
        msProvider.addSessionListener(this);
        msProvider.addConnectionListener(this);
        msProvider.addNotificationListener(this);

        // Create the session
        msSession = msProvider.createSession();

        // Create the connection, passing the endpoint name. Here we are
        // creating an IVR endpoint which will be connected to the user agent
        // (remoteDesc is the SDP of the remote end)
        MsConnection msConnection = msSession.createNetworkConnection("media/trunk/IVR/$");

        // Get the remote SDP here and pass it to the connection. If creation
        // of the connection is successful, connectionCreated() will be called
        msConnection.modify("$", remoteDesc);
    }

    public void connectionCreated(MsConnectionEvent event) {
        MsConnection connection = event.getConnection();
        MsEndpoint endpoint = connection.getEndpoint();

        // This is the actual name; it could be something like
        // 'media/trunk/IVR/enp-1'
        String endpointName = endpoint.getLocalName();

        MsEventFactory factory = msProvider.getEventFactory();
        MsDtmfRequestedEvent dtmf = (MsDtmfRequestedEvent) factory.createRequestedEvent(DTMF.TONE);

        MsRequestedSignal[] signals = new MsRequestedSignal[] {};
        MsRequestedEvent[] events = new MsRequestedEvent[] { dtmf };

        endpoint.execute(signals, events, connection);
    }

    public void update(MsNotifyEvent evt) {
        MsEventIdentifier identifier = evt.getEventID();
        if (identifier.equals(DTMF.TONE)) {
            MsDtmfNotifyEvent event = (MsDtmfNotifyEvent) evt;
            String seq = event.getSequence();
            if (seq.equals("0")) {
            } else if (seq.equals("1")) {
            } else if (seq.equals("2")) {
            } else if (seq.equals("3")) {
            } else if (seq.equals("4")) {
            } else if (seq.equals("5")) {
            } else if (seq.equals("6")) {
            } else if (seq.equals("7")) {
            } else if (seq.equals("8")) {
            } else if (seq.equals("9")) {
            }
        }
    }
}
</programlisting> </example> <example id="captms-DTMF_Signal_Example_Code"> <title>DTMF Signal to Endpoint Example Code</title> <programlisting linenumbering="unnumbered" role="JAVA">
// Example that shows how a DTMF signal can be applied to an endpoint
MsEventFactory eventFactory = msProvider.getEventFactory();

MsRequestedSignal dtmf = eventFactory.createRequestedSignal(DTMF.TONE);
dtmf.setTone("1");

MsRequestedSignal[] signals = new MsRequestedSignal[] { dtmf };
MsRequestedEvent[] events = new MsRequestedEvent[] {};

msEndpoint.execute(signals, events, connection);
</programlisting> </example> <example id="captms-FAILED_Event_Example_Code"> <title>Record and Listen FAILED Event Example Code</title> <programlisting linenumbering="unnumbered" role="JAVA">
// Example that shows how to begin recording and listen for the FAILED event
String RECORDER = "file://home/user/recordedfile.wav";

MsEventFactory eventFactory = msProvider.getEventFactory();

MsRecordRequestedSignal record =
        (MsRecordRequestedSignal) eventFactory.createRequestedSignal(MsAudio.RECORD);
record.setFile(RECORDER);

MsRequestedEvent onFailed = eventFactory.createRequestedEvent(MsAudio.FAILED);
onFailed.setEventAction(MsEventAction.NOTIFY);

MsRequestedSignal[] requestedSignals = new MsRequestedSignal[] { record };
MsRequestedEvent[] requestedEvents = new MsRequestedEvent[] { onFailed };

endpoint.execute(requestedSignals, requestedEvents, connection);

// NOTE: Passing an empty MsRequestedSignal[] and MsRequestedEvent[] will
// nullify all previous MsRequestedSignal and MsRequestedEvent requests
</programlisting> </example> </section> </section> </chapter> <chapter id="msep-MS-Event_Packages" lang="en-US"> <!-- chapter id nickname: msep --><title>MMS: Event Packages</title> <formalpara> <title>The Basic Package</title> <para> 
Package name: <literal>org.mobicents.media.server.spi.events.Basic</literal> </para> </formalpara> <informaltable frame="all" id="msep-The_Basic_Package"> <tgroup align="left" cols="4" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <colspec colname="col4" colnum="4"/> <thead> <row> <entry> Event ID </entry> <entry> Description </entry> <entry> Type </entry> <entry> Duration </entry> </row> </thead> <tbody> <row> <entry> <literal>org.mobicents.media.server.spi.events.Basic.DTMF</literal> </entry> <entry> DTMF Event </entry> <entry> BR </entry> <entry> </entry> </row> </tbody> </tgroup> </informaltable> <formalpara> <title>The Announcement Package</title> <para> Package name: <literal>org.mobicents.media.server.spi.event.Announcement</literal> </para> </formalpara> <informaltable frame="all" id="msep-The_Announcement_Package"> <tgroup align="left" cols="4" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <colspec colname="col4" colnum="4"/> <thead> <row> <entry> Event ID </entry> <entry> Description </entry> <entry> Type </entry> <entry> Duration </entry> </row> </thead> <tbody> <row> <entry> <literal>org.mobicents.media.server.spi.event.Announcement.PLAY</literal> </entry> <entry> play an announcement </entry> <entry> TO </entry> <entry> Variable </entry> </row> <row> <entry> <literal>org.mobicents.media.server.spi.event.Announcement.COMPLETED</literal> </entry> <entry> </entry> <entry> </entry> <entry> </entry> </row> <row> <entry> <literal>org.mobicents.media.server.spi.event.Announcement.FAILED</literal> </entry> <entry> </entry> <entry> </entry> <entry> </entry> </row> </tbody> </tgroup> </informaltable> <para> Announcement actions are qualified by URLs and by sets of initial parameters. 
The <quote>operation completed</quote> (<literal>COMPLETED</literal>) event will be detected once an announcement has finished playing. If the announcement cannot be played in its entirety, an <quote>operation failure</quote> (<literal>FAILED</literal>) event can be returned. The failure can also be explained with a commentary. </para> <formalpara> <title>The Advanced Audio Package</title> <para> Package name: <literal>org.mobicents.media.server.spi.events.AU</literal> </para> </formalpara> <informaltable frame="all" id="msep-The_Advanced_Audio_Package"> <tgroup align="left" cols="4" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <colspec colname="col3" colnum="3"/> <colspec colname="col4" colnum="4"/> <thead> <row> <entry> Event ID </entry> <entry> Description </entry> <entry> Type </entry> <entry> Duration </entry> </row> </thead> <tbody> <row> <entry> <literal>org.mobicents.media.server.spi.event.AU.PLAY_RECORD</literal> </entry> <entry> Play a prompt (optional) and then record some speech </entry> <entry> TO </entry> <entry> Variable </entry> </row> <row> <entry> <literal>org.mobicents.media.server.spi.event.AU.PROMPT_AND_COLLECT</literal> </entry> <entry> </entry> <entry> </entry> <entry> </entry> </row> <row> <entry> <literal>org.mobicents.media.server.spi.event.AU.FAILED</literal> </entry> <entry> </entry> <entry> </entry> <entry> </entry> </row> </tbody> </tgroup> </informaltable> <para> The function of <literal>PLAY_RECORD</literal> is to play a prompt and record the user's speech. If the user does not speak, the user may be re-prompted and given another chance to record.
By default, <literal>PLAY_RECORD</literal> does not play an initial prompt, makes only one attempt to record, and therefore functions as a simple record operation. </para> </chapter> <chapter id="msde-MS_Demonstration_Example" lang="en-US"> <!-- chapter id nickname: msde --><title>MMS Demonstration Example</title> <para> The purpose of this example is to demonstrate the capabilities of the new Media Server (MMS) and the Media Server Control Resource Adapter (MSC-RA). </para> <para> The example demonstrates the usage of the following Endpoints: </para> <itemizedlist> <listitem> <para> Announcement </para> </listitem> <listitem> <para> Packet Relay </para> </listitem> <listitem> <para> Loop </para> </listitem> <listitem> <para> Conference </para> </listitem> <listitem> <para> IVR </para> </listitem> </itemizedlist> <para> For more information on each of these types of endpoints, refer to <xref linkend="ittms-Media_Server_Architecture"/>. </para> <formalpara> <title>Where is the Code?</title> <para> Check out the 'mms-demo' example from <ulink url="http://code.google.com/p/mobicents/source/browse/#svn/branches/servers/media/1.x.y/examples/mms-demo"/>. </para> </formalpara> <formalpara> <title>Install and Run</title> <para> Start the Mobicents Server (this will also start the Media Server). Make sure you have <filename>server/default/deploy/mobicents.sar</filename> and <filename>server/default/deploy/mediaserver.sar</filename> in your Mobicents Server installation. </para> </formalpara> <formalpara> <title>From Binary</title> <para> Go to <filename>/examples/mms-demo</filename> and call 'ant deploy-all'. This will deploy the SIP RA, the MSC RA, the mms-demo example and also mms-demo-audio.war. The war file contains the audio *.wav files that are used by the mms-demo example.
</para> </formalpara> <formalpara> <title>From Source Code</title> <para> If you are deploying from source code, you may deploy each of the resource adapters individually. </para> </formalpara> <itemizedlist> <listitem> <para> Make sure <envar>JBOSS_HOME</envar> is set and the server is running. </para> </listitem> <listitem> <para> Call mvn install from <filename>servers/jain-slee/resources/sip</filename> to deploy the SIP RA. </para> </listitem> <listitem> <para> Call mvn install from <filename>servers/media/controllers/msc</filename> to deploy the media RA. </para> </listitem> <listitem> <para> Call mvn install from <filename>servers/media/examples/mms-demo</filename> to deploy the example. </para> </listitem> </itemizedlist> <para> Once the example is deployed, make a call from your SIP Phone to TBD. </para> <formalpara> <title>1010: Loop Endpoint Usage Demonstration</title> <para> As soon as the call is established, CallSbb creates a Connection using PREndpointImpl. PREndpointImpl has two Connections, one of which is connected to the calling UA by calling msConnection.modify("$", sdp). Once the connection is established, CallSbb creates the child LoopDemoSbb and calls startDemo() on it, passing the PREndpoint name as an argument. LoopDemoSbb creates the child AnnouncementSbb, which uses the AnnEndpointImpl to make an announcement. The other Connection of PREndpointImpl is connected to the Connection from AnnEndpointImpl using the MsLink. </para> </formalpara> <programlisting linenumbering="unnumbered" role="JAVA">
MsLink link = session.createLink(MsLink.MODE_FULL_DUPLEX);
....
...
link.join(userEndpoint, ANNOUNCEMENT_ENDPOINT);
</programlisting> <para> Once the link is created (look at onLinkConnected()), <literal>AnnouncementSbb</literal> creates an instance of <literal>MsPlayRequestedSignal</literal> and sets the path of the audio URL.
<literal>AnnouncementSbb</literal> also creates an instance of <literal>MsRequestedEvent</literal> for <constant>MsAnnouncement.COMPLETED</constant> and <constant>MsAnnouncement.FAILED</constant> such that the Media resource adapter fires respective events and the SBB has a handler for the <constant>org.mobicents.media.events.announcement.COMPLETED</constant> event to handle <literal>Announcement Complete</literal>. </para> <programlisting linenumbering="unnumbered" role="JAVA"> MsEventFactory eventFactory = msProvider.getEventFactory(); MsPlayRequestedSignal play = null; play = (MsPlayRequestedSignal) eventFactory.createRequestedSignal(MsAnnouncement.PLAY); play.setURL(url); MsRequestedEvent onCompleted = null; MsRequestedEvent onFailed = null; onCompleted = eventFactory.createRequestedEvent(MsAnnouncement.COMPLETED); onCompleted.setEventAction(MsEventAction.NOTIFY); onFailed = eventFactory.createRequestedEvent(MsAnnouncement.FAILED); onFailed.setEventAction(MsEventAction.NOTIFY); MsRequestedSignal[] requestedSignals = new MsRequestedSignal[]{play}; MsRequestedEvent[] requestedEvents = new MsRequestedEvent[]{onCompleted, onFailed}; link.getEndpoints()[1].execute(requestedSignals, requestedEvents, link); </programlisting> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-AnnEndpointImpl"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-AnnEndpointImpl.png" format="PNG" scalefit="1" width="450"/> </imageobject> <caption> <para> Announcement Endpoint </para> </caption> </mediaobject> <para> As soon as the announcement is over <literal>LoopDemoSbb</literal> creates child <literal>LoopbackSbb</literal> and calls <methodname>startConversation()</methodname> on it, passing the <literal>PREndpoint</literal> name as argument. <literal>LoopbackSbb</literal> uses <literal>MsLink</literal> to associate the other connection of <literal>PREndpointImpl</literal> to <literal>LoopEndpointImpl</literal>. 
<literal>LoopEndpointImpl</literal> simply forwards voice packets received from the caller back to the caller. </para> <programlisting linenumbering="unnumbered" role="JAVA">
MsLink link = session.createLink(MsLink.MODE_FULL_DUPLEX);
.......
...
link.join(endpointName, LOOP_ENDPOINT);
</programlisting> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-LoopEndpointImpl"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-LoopEndpointImpl.png" format="PNG" scalefit="1" width="450"/> </imageobject> <caption> <para> Loop Endpoint </para> </caption> </mediaobject> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-LoopDemoSbb"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-LoopDemoSbb.png" format="PNG" width="432"/> </imageobject> <caption> <para> The SBB Child Relation Diagram </para> </caption> </mediaobject> <formalpara> <title>1011: DTMF Usage Demonstration</title> <para> As soon as the call is established, CallSbb creates a Connection using PREndpointImpl. PREndpointImpl has two Connections, one of which is connected to the calling UA by calling msConnection.modify("$", sdp). Once the connection is established, CallSbb creates the child DtmfDemoSbb and calls startDemo() on it, passing the PREndpoint name as an argument. DtmfDemoSbb creates the child AnnouncementSbb, which uses the AnnEndpointImpl to make an announcement. The other Connection of PREndpointImpl is connected to the Connection from AnnEndpointImpl using the MsLink. </para> </formalpara> <programlisting linenumbering="unnumbered" role="JAVA">
MsLink link = session.createLink(MsLink.MODE_FULL_DUPLEX);
....
...
link.join(userEndpoint, ANNOUNCEMENT_ENDPOINT);
</programlisting> <para> Once the link is created, the flow is the same as for 1010 to play the announcement.
</para> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-AnnEndpointImpl-2"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-AnnEndpointImpl.png" format="PNG" scalefit="1" width="450"/> </imageobject> <caption> <para> Announcement Endpoint Implementation </para> </caption> </mediaobject> <para> As soon as the announcement is over, DtmfDemoSbb creates an instance of MsDtmfRequestedEvent and applies it to the IVR endpoint. Look at the onAnnouncementComplete() method of DtmfDemoSbb: </para> <programlisting linenumbering="unnumbered" role="JAVA">
MsLink link = (MsLink) evt.getSource();
MsEndpoint ivr = link.getEndpoints()[1];

MsEventFactory factory = msProvider.getEventFactory();
MsDtmfRequestedEvent dtmf = (MsDtmfRequestedEvent) factory.createRequestedEvent(DTMF.TONE);

MsRequestedSignal[] signals = new MsRequestedSignal[]{};
MsRequestedEvent[] events = new MsRequestedEvent[]{dtmf};

ivr.execute(signals, events, link);
</programlisting> <para> For every DTMF event received, DtmfDemoSbb plays the corresponding WAV file using AnnouncementSbb, as explained above. </para> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-DTMFDemoSbb"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-DTMFDemoSbb.png" format="PNG" width="192"/> </imageobject> <caption> <para> The SBB Child Relation Diagram </para> </caption> </mediaobject> <formalpara> <title>1012: ConfEndpointImpl Usage Demonstration</title> <para> As soon as the call is established, CallSbb creates a Connection using PREndpointImpl. PREndpointImpl has two Connections, one of which is connected to the calling UA by calling msConnection.modify("$", sdp). Once the connection is established, CallSbb creates the child ConfDemoSbb and calls startDemo() on it, passing the PREndpoint name as an argument. ConfDemoSbb creates the child AnnouncementSbb, which uses the AnnEndpointImpl to make an announcement. The other Connection of PREndpointImpl is connected to the Connection from AnnEndpointImpl using the MsLink.
</para> </formalpara> <programlisting linenumbering="unnumbered" role="JAVA">
....
MsLink link = session.createLink(MsLink.MODE_FULL_DUPLEX);
....
...
link.join(userEndpoint, ANNOUNCEMENT_ENDPOINT);
</programlisting> <para> Once the link is created, the flow is the same as for 1010 to play the announcement. </para> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-AnnEndpointImpl-3"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-AnnEndpointImpl.png" format="PNG" scalefit="1" width="450"/> </imageobject> <caption> <para> Announcement Endpoint Implementation </para> </caption> </mediaobject> <para> As soon as the announcement is over, ConfDemoSbb creates the child ForestSbb and calls enter() on it, passing the PREndpoint name as an argument. ForestSbb uses MsLink to associate the other Connection of PREndpointImpl with ConfEndpointImpl: </para> <programlisting linenumbering="unnumbered" role="JAVA">
MsLink link = session.createLink(MsLink.MODE_FULL_DUPLEX);
link.join(endpointName, CNF_ENDPOINT);
</programlisting> <para> Once the link is established (look at onConfBridgeCreated()), ForestSbb creates several child AnnouncementSbb instances, each responsible for a unique announcement (in this case, playing crickets.wav and mocking.wav). The UA is now listening to several announcements at once.
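Conceptually, the conference endpoint can deliver several announcements at once because it mixes its input streams into one output stream. The following is a minimal, self-contained sketch of that idea in plain Java; the class name <literal>MixerSketch</literal> is hypothetical and this is not MMS code, merely an illustration of mixing as the clipped sum of 16-bit PCM samples.

```java
// Illustrative sketch (not MMS code): mix several 16-bit PCM streams
// by summing their samples and clipping the result to the 16-bit range.
public class MixerSketch {

    public static short[] mix(short[][] streams) {
        // The mixed stream is as long as the longest input stream
        int length = 0;
        for (short[] s : streams) {
            length = Math.max(length, s.length);
        }
        short[] out = new short[length];
        for (int i = 0; i < length; i++) {
            int sum = 0;
            for (short[] s : streams) {
                if (i < s.length) {
                    sum += s[i];
                }
            }
            // Clip to the 16-bit range to avoid overflow distortion
            out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
        }
        return out;
    }
}
```

A real mixer would also handle timing, jitter and per-stream gain, but the core operation is this sample-wise sum.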
</para> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-ConfEndpointImpl"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-ConfEndpointImpl.png" format="PNG" scalefit="1" width="450"/> </imageobject> <caption> <para> Conference Endpoint Implementation </para> </caption> </mediaobject> <mediaobject id="msde-mms-MMSDemonstrationEx-dia-ConfDemoSbb"> <imageobject> <imagedata align="center" fileref="images/mms-MMSDemonstrationEx-dia-ConfDemoSbb.png" format="PNG" width="450"/> </imageobject> <caption> <para> SBB Child Relation </para> </caption> </mediaobject> <formalpara> <title>Recording Usage Demonstration</title> <para> As soon as the call is established, <literal>CallSbb</literal> creates a Connection using <literal>PREndpointImpl</literal>. <literal>PREndpointImpl</literal> has two Connections, one of which is connected to the calling User Agent by calling <methodname>msConnection.modify("$", sdp)</methodname>. Once the connection is established, <literal>CallSbb</literal> creates the child <literal>RecorderDemoSbb</literal> and calls <methodname>startDemo()</methodname> on it, passing the <literal>PREndpoint</literal> name as an argument. <literal>RecorderDemoSbb</literal> creates the child <literal>AnnouncementSbb</literal>, which uses the <literal>AnnEndpointImpl</literal> to make an announcement. The other Connection of <literal>PREndpointImpl</literal> is connected to the Connection from <literal>AnnEndpointImpl</literal> using the <literal>MsLink</literal>.
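The recorder ultimately writes the received audio to a WAV file, as in the RECORDER file URL shown earlier. As a rough illustration of that final step only, the following plain-JDK sketch (the class name <literal>RecorderSketch</literal> is hypothetical; this is not the MMS recorder implementation) wraps 16-bit mono PCM samples in a WAV container:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.IOException;

// Illustrative sketch (not MMS code): write 16-bit mono PCM samples
// as an 8 kHz WAV file, roughly the final step a recorder performs.
public class RecorderSketch {

    public static File writeWav(short[] samples, File out) throws IOException {
        // Convert the samples to little-endian byte pairs
        byte[] bytes = new byte[samples.length * 2];
        for (int i = 0; i < samples.length; i++) {
            bytes[2 * i] = (byte) (samples[i] & 0xFF);
            bytes[2 * i + 1] = (byte) ((samples[i] >> 8) & 0xFF);
        }
        // 8000 Hz, 16-bit, mono, signed, little-endian: a typical telephony format
        AudioFormat format = new AudioFormat(8000f, 16, 1, true, false);
        AudioInputStream stream = new AudioInputStream(
                new ByteArrayInputStream(bytes), format, samples.length);
        AudioSystem.write(stream, AudioFileFormat.Type.WAVE, out);
        return out;
    }
}
```

In the real server the samples arrive over RTP rather than from an in-memory array, but the container-writing step is the same.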
</para> </formalpara> </chapter> <chapter id="msbp-MS-Best_Practices" lang="en-US"> <!-- chapter id nickname: msbp --><title>MMS: Best Practices</title> <section> <title>Mobicents Media Server Best Practices</title> <para> Note: these best practices apply to Mobicents Media Server version 1.0.0.CR6 and later </para> <section> <title>DTMF Detection Mode: RFC2833 versus Inband versus Auto</title> <para> The Mobicents Media Server will block the resource depending on the DTMF detection mode configured in <emphasis>jboss-service.xml</emphasis> at start-up time. Inband is highly resource-intensive and must perform many more calculations in order to detect DTMF when compared to RFC2833. So if your application already knows that User Agents (UAs) support RFC2833, it is always better to configure DTMF mode as <emphasis>RFC2833</emphasis> rather than as <emphasis>Inband</emphasis> or <emphasis>Auto</emphasis>. Also, please note that <emphasis>Auto</emphasis> is even more resource-intensive because it does not know beforehand whether DTMF would be Inband or <emphasis>RFC2833</emphasis>, and hence both detection methods must be started. The default detection mode is <emphasis>RFC2833</emphasis>. </para> <para> All of the Conference, Packet Relay and IVR endpoints have DTMF detection enabled; the mode can be configured using <emphasis>jboss-service.xml</emphasis>. We advise retaining the same mode for all three, but this is not a necessity. </para> </section> <section> <title>Transcoding Is CPU-Intensive</title> <para> Digital Signal Processing (DSP) is very costly and should be avoided as much as possible. By default, Announcement endpoints and IVR endpoints do not have DSP enabled. What this means is that your application needs to know beforehand which codecs are supported by your UA; you can then ask Announcement or IVR to play an audio file which has been pre-encoded in one of these formats. The onus of deciding which pre-encoded file to play lies with the application. 
For example, if I am writing a simple announcement application that would only play announcements to the end user, and I know that my end users have either the <emphasis>PCMU</emphasis> or <emphasis>GSM</emphasis> codec, then I would make sure to have pre-encoded audio files such as <emphasis>helloworld-pcmu.wav</emphasis> and <emphasis>helloworld-gsm.gsm</emphasis>. Then, when the UA attempts to connect to the Media Server, my application knows which codecs the UA supports and can ask the Media Server to play the respective file. </para> <para> This strategy will work fine because, most of the time in the telecommunications world, applications have a known set of supported codecs. However, if this is not true, or if you are writing a simple demo application and need or want all codecs to be supported, you can put a Packet Relay endpoint in front of the Announcement or IVR endpoint. This way, the Packet Relay will do all necessary digital signal processing, and your application need not bother about which audio file to play. The audio file in this case will be encoded in <emphasis>Linear</emphasis> format, and all UAs, irrespective of whether they support <emphasis>PCMU</emphasis>, <emphasis>PCMA</emphasis>, <emphasis>Speex</emphasis>, <emphasis>G729</emphasis> or <emphasis>GSM</emphasis> codecs, would be able to hear the announcement. </para> </section> <section> <title>Conference Endpoints block the Number of Connections at Start Time</title> <para> The Conference endpoint starts all of the connections at boot time. This means that Conference blocks all the necessary resources at start time even if UAs are not yet connected. In our experience, this is required because resource allocation at runtime causes jitter for the other participants. Due to this, there is a cap on the maximum number of connections a conference can handle, which takes effect at start time.
By default, this number is set to five in <emphasis>jboss-service.xml</emphasis>: </para> <screen><attribute name="MaxConnections">5</attribute> </screen> <para> If your requirements are such that your application will have conferences ranging from five to ten simultaneous users, it is best to define two or more <emphasis>ConfTrunkManagement</emphasis> <emphasis>MBean</emphasis>s, each with a unique MBean name, and allow your application to use the correct Conference endpoint rather than changing the value of <emphasis>MaxConnections</emphasis> to "10" for all. For example: </para> <screen> <mbean code="org.mobicents.media.server.impl.jmx.enp.cnf.ConfTrunkManagement" name="media.mobicents:endpoint=conf5"> <depends>media.mobicents:service=RTPManager,QID=1</depends> <attribute name="JndiName">media/trunk/Conference5</attribute> <attribute name="RtpFactoryName">java:media/mobicents/protocol/RTP</attribute> <attribute name="Channels">1</attribute> <attribute name="DtmfMode">RFC2833</attribute> <!--MaxConnections represents the maximum number of participants who can join a conference. Use judiciously: this blocks resources at MMS startup--> <attribute name="MaxConnections">5</attribute> </mbean> </screen> <screen> <mbean code="org.mobicents.media.server.impl.jmx.enp.cnf.ConfTrunkManagement" name="media.mobicents:endpoint=conf7"> <depends>media.mobicents:service=RTPManager,QID=1</depends> <attribute name="JndiName">media/trunk/Conference7</attribute> <attribute name="RtpFactoryName">java:media/mobicents/protocol/RTP</attribute> <attribute name="Channels">1</attribute> <attribute name="DtmfMode">RFC2833</attribute> <!--MaxConnections represents the maximum number of participants who can join a conference. Use judiciously: this blocks resources at MMS startup--> <attribute name="MaxConnections">7</attribute> </mbean> </screen> <screen> <mbean code="org.mobicents.media.server.impl.jmx.enp.cnf.ConfTrunkManagement" name="media.mobicents:endpoint=conf10"> <depends>media.mobicents:service=RTPManager,QID=1</depends> <attribute name="JndiName">media/trunk/Conference10</attribute> <attribute name="RtpFactoryName">java:media/mobicents/protocol/RTP</attribute> <attribute name="Channels">1</attribute> <attribute name="DtmfMode">RFC2833</attribute> <!--MaxConnections represents the maximum number of participants who can join a conference. Use judiciously: this blocks resources at MMS startup--> <attribute name="MaxConnections">10</attribute> </mbean> </screen> <para> Finally, ensure that you configure <emphasis>Channels</emphasis> carefully: this represents the number of conferences that can occur concurrently. </para> </section> </section> </chapter> <appendix lang="en-US"> <title>Understanding Digital Signal Processing and Streaming</title> <para> The following information provides a basic introduction to Digital Signal Processing and Streaming technologies. These two technologies are used extensively in the Media Server, therefore understanding these concepts will assist developers in creating customized media services for the Media Server. </para> <section> <title>Introduction to Digital Signal Processing</title> <para> Digital Signal Processing, as the name suggests, is the processing of signals by digital means. A signal in this context can mean a number of different things. Historically, the origins of signal processing are in electrical engineering, and a signal here means an electrical signal carried by a wire or telephone line, or perhaps by a radio wave. More generally, however, a signal is a stream of information representing anything from stock prices to data from a remote-sensing satellite.
The term "digital" originates from the word "digit", meaning a number, therefore "digital" literally means numerical. This introduction to DSP will focus primarily on two types of digital signals: audio and voice. </para> </section> <section> <title>Analog and Digital Signals</title> <para> Data can already be in a digital format (for example, the data stream from a Compact Disk player), and will not require any conversion. In many cases, however, a signal is received in the form of an analog electrical voltage or current, produced by a microphone or other type of transducer. Before DSP techniques can be applied to an analog signal, it must be converted into digital form. Analog electrical voltage signals can be digitized using an analog-to-digital converter (ADC), which generates a digital output as a stream of binary numbers. These numbers represent the electrical voltage input to the device at each sampling instant. </para> <section> <title>Discrete Signals</title> <para> When converting a continuous analog signal to a digital signal, the analog signal must be converted to a signal format that computers can analyze and perform complex calculations on. Discrete signals are easily stored and transmitted over digital networks and have the ability to be discrete in magnitude, time, or both. </para> <para> Discrete-in-time values only exist at certain points in time. For example, if a sample of discrete-in-time data is taken at a point in time where there is no data, the result is zero. </para> <mediaobject id="udspas-Discrete-Signals-Discrete_in_Time"> <imageobject> <imagedata align="center" fileref="images/mms-DiscreteSignals-dia-Discrete_In_Time.png" format="PNG" width="405"/> </imageobject> </mediaobject> <para> Discrete-in-magnitude values exist across a time range; however, the value of the datum in each time range consists of one constant result, rather than a variable set of results.
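The two discrete forms can be illustrated with a short sketch (plain Java, purely illustrative; the class name <literal>DiscreteSketch</literal> is hypothetical): sampling a continuous function yields discrete-in-time values, while holding each value constant over its time range (a zero-order hold) yields a discrete-in-magnitude signal.

```java
// Illustrative sketch of the two discrete signal forms described above.
public class DiscreteSketch {

    // Discrete-in-time: the signal only has values at the sampling instants.
    public static double[] sample(double frequencyHz, double sampleRateHz, int count) {
        double[] samples = new double[count];
        for (int n = 0; n < count; n++) {
            // Evaluate a continuous sine wave at the n-th sampling instant
            samples[n] = Math.sin(2 * Math.PI * frequencyHz * n / sampleRateHz);
        }
        return samples;
    }

    // Discrete-in-magnitude: one constant value per time range (zero-order hold).
    public static double valueAt(double[] samples, double sampleRateHz, double timeSeconds) {
        int n = (int) Math.floor(timeSeconds * sampleRateHz);
        return samples[n];
    }
}
```

Asking for the signal value anywhere inside a sampling interval returns that interval's constant value, which is exactly the "one constant result per time range" behavior described above.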
</para> <mediaobject id="udspas-Discrete-Signals-Discrete_in_Magnitude"> <imageobject> <imagedata align="center" fileref="images/mms-DiscreteSignals-dia-Discrete_In_Magnitude.png" format="PNG" width="405"/> </imageobject> </mediaobject> <para> By converting continuous analog signals to discrete signals, finer computer data analysis is possible, and the signal can be stored and transmitted efficiently over digital networks. </para> </section> </section> <section> <title>Sampling, Quantization, and Packetization</title> <para> Sampling is the process of recording the values of a signal at given points in time. For ADCs, these points in time are equidistant, with the number of samples taken during one second known as the sample rate. It is important to understand that these samples are still analog values. The mathematical description of ideal sampling is the multiplication of the signal with a sequence of Dirac pulses. </para> <para> Quantization is the process of representing the value of an analog signal by a fixed number of bits. The value of the analog signal is compared to a set of pre-defined levels. Each level is represented by a unique binary number, and the binary number that corresponds to the level closest to the analog signal value is chosen to represent that sample. </para> <para> Sampling and quantization prepare digitized media for future processing or streaming. However, streaming and processing individual samples is not effective for high volumes of data transferred via a network. The risk of data-loss is much higher when a large portion of data is transferred in a block. Networked media should be transmitted using media packets that carry several samples, thereby reducing the risk of data loss through the transmission process. This process is referred to as packetization.
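A minimal sketch of the last two steps (plain Java, purely illustrative; the class name <literal>QuantizeSketch</literal> and the 160-samples-per-packet figure are assumptions, the latter corresponding to 20 ms of audio at an 8 kHz sample rate):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of quantization and packetization. Not MMS code.
public class QuantizeSketch {

    // Uniform quantization: map a value in [-1.0, 1.0] to the nearest of
    // 2^bits pre-defined levels, returning the level index (the binary
    // number that represents the sample).
    public static int quantize(double value, int bits) {
        int levels = 1 << bits;
        double clamped = Math.max(-1.0, Math.min(1.0, value));
        return (int) Math.round((clamped + 1.0) / 2.0 * (levels - 1));
    }

    // Packetization: group quantized samples into fixed-size packets,
    // e.g. 160 samples per packet (20 ms of audio at 8 kHz).
    public static List<int[]> packetize(int[] samples, int samplesPerPacket) {
        List<int[]> packets = new ArrayList<int[]>();
        for (int start = 0; start < samples.length; start += samplesPerPacket) {
            int end = Math.min(start + samplesPerPacket, samples.length);
            int[] packet = new int[end - start];
            System.arraycopy(samples, start, packet, 0, packet.length);
            packets.add(packet);
        }
        return packets;
    }
}
```

The difference between the original value and the chosen level is the quantization error; using more bits gives more levels and therefore a smaller error.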
</para> </section> <section> <title>Transfer Protocols</title> <para> The Real-time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP) and the Real-time Transport Control Protocol (RTCP) were specifically designed to stream media over networks. The latter two are built on top of UDP. </para> <section> <title>Real-time Transport Protocol</title> <para> RTP provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. RTP does not address resource reservation and does not guarantee quality-of-service for real-time services. The data transport is augmented by the Real-time Control Protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality. RTP and RTCP are designed to be independent of the underlying transport and network layers. </para> <para> An RTP packet consists of an RTP header, followed by the data to send. In the RTP specification, this data is referred to as the payload. The header is transmitted in network byte order, just like the IP header. The following figure shows the RTP header format. </para> <mediaobject id="udspas-RealTimeTransportProtocol-dia-RTP_Header"> <imageobject> <imagedata align="center" fileref="images/mms-RealTimeTransportProtocol-dia-RTP_Header.png" format="PNG" scalefit="1" width="405"/> </imageobject> </mediaobject> <table frame="all" id="udspas-RTP_Header_Format"> <title>RTP Header Format</title> <tgroup align="left" cols="2" colsep="1" rowsep="1"> <colspec colname="col1" colnum="1"/> <colspec colname="col2" colnum="2"/> <thead> <row> <entry> Header Component </entry> <entry> Description </entry> </row> </thead> <tbody> <row> <entry> V (Version) </entry> <entry> Contains the version number of the RTP protocol. For example, the current version number is <literal>2</literal>.
This part of the header consumes 2 bits of the RTP packet. </entry> </row> <row> <entry> P (Padding) </entry> <entry> Contains padding bytes, which are excluded from the payload data count. The last padding byte contains the number of padding bytes present in the packet. Padding may be required for certain encryption algorithms that need the payload to be aligned on a multi-byte boundary. </entry> </row> <row> <entry> X (Extension) </entry> <entry> Specifies whether the header contains an Extension Header. </entry> </row> <row> <entry> CC (CSRC Count) </entry> <entry> Specifies how many contributing sources are specified in the header. </entry> </row> <row> <entry> M (Marker) </entry> <entry> Contains arbitrary data that can be interpreted by an application. The RTP specification does not limit the information type contained in this component of the header. For example, the Marker component might specify that media data is contained within the packet. </entry> </row> <row> <entry> PT (Payload Type) </entry> <entry> Specifies the type of data the packet contains, which determines how an application receiving the packet interprets the payload. </entry> </row> <row> <entry> Sequence Number </entry> <entry> Contains a unique numerical value that can be used by applications to place received packets in the correct order. Video streams rely on the sequence number to order the packets for individual video frames received by an application. The starting number for a packet stream is randomized for security reasons. </entry> </row> <row> <entry> Time Stamp </entry> <entry> Contains the synchronization information for a stream of packets. The value specifies when the first byte of the payload was sampled. The starting number for the Time Stamp is also randomized for security reasons. For audio, the timestamp is typically incremented by the number of samples in the packet so that the receiving application can play the audio data at exactly the right time.
For video, the timestamp is typically incremented per image. One image of a video will generally be sent in several packets, therefore the pieces of data will have the same Time Stamp, but use a different Sequence Number. </entry> </row> <row> <entry> SSRC ID </entry> <entry> Contains the packet Synchronization Source (SSRC) identifier of the sender. The information contained in this component of the header is used to correctly order multiple RTP streams, a scenario that often occurs when an application sends both video and audio RTP streams. The identifier is chosen randomly; this reduces the chance of two streams having the same identifier, so the receiving application can correctly order and synchronize the data. </entry> </row> <row> <entry> CSRC ID </entry> <entry> Contains one (or more) Contributing Source (CSRC) identifiers for each RTP stream present in the packet. To assist with the re-assembly of audio streams, the SSRC IDs can be appended to this packet component. The SSRC ID of the packet then becomes the source identifier for the forwarded packet. </entry> </row> <row> <entry> Extension Header </entry> <entry> Contains arbitrary information, specified by the application. RTP defines only the extension mechanism. The extensions contained within the Extension Header are controlled by the application. </entry> </row> </tbody> </tgroup> </table> <note> <para> RTP headers do not contain a payload length field. The protocol relies on the underlying protocol to determine the end of the payload. For example, in the TCP/IP architecture, RTP is used on top of UDP, which does contain length information. Using this, an application can determine the size of the whole RTP packet and, after its header has been processed, the application automatically knows the amount of data in its payload section.
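To make the header layout concrete, here is a short, self-contained sketch (plain Java, not MMS code; the class name <literal>RtpHeaderSketch</literal> is hypothetical) that extracts the fixed twelve-byte portion of an RTP header using the bit positions from the table above:

```java
import java.nio.ByteBuffer;

// Illustrative parser for the fixed 12-byte portion of an RTP header.
public class RtpHeaderSketch {
    public int version;        // V: 2 bits
    public boolean padding;    // P: 1 bit
    public boolean extension;  // X: 1 bit
    public int csrcCount;      // CC: 4 bits
    public boolean marker;     // M: 1 bit
    public int payloadType;    // PT: 7 bits
    public int sequenceNumber; // 16 bits
    public long timestamp;     // 32 bits
    public long ssrc;          // 32 bits

    public static RtpHeaderSketch parse(byte[] packet) {
        // ByteBuffer reads big-endian by default, matching network byte order
        ByteBuffer buf = ByteBuffer.wrap(packet);
        RtpHeaderSketch h = new RtpHeaderSketch();
        int b0 = buf.get() & 0xFF;
        h.version = (b0 >> 6) & 0x03;
        h.padding = ((b0 >> 5) & 0x01) != 0;
        h.extension = ((b0 >> 4) & 0x01) != 0;
        h.csrcCount = b0 & 0x0F;
        int b1 = buf.get() & 0xFF;
        h.marker = ((b1 >> 7) & 0x01) != 0;
        h.payloadType = b1 & 0x7F;
        h.sequenceNumber = buf.getShort() & 0xFFFF;      // unsigned 16-bit
        h.timestamp = buf.getInt() & 0xFFFFFFFFL;        // unsigned 32-bit
        h.ssrc = buf.getInt() & 0xFFFFFFFFL;             // unsigned 32-bit
        return h;
    }
}
```

After these twelve bytes come `csrcCount` four-byte CSRC identifiers and, if the X bit is set, the extension header; as the note above explains, the payload length is not in the header and must come from the underlying transport.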
</para> </note> </section> <section> <title>Real-time Transport Control Protocol</title> <para> The RTP is accompanied by a control protocol, the Real-time Transport Control Protocol (RTCP). Each participant in an RTP session periodically sends RTCP packets to all other participants in the session for the following reasons: </para> <itemizedlist> <listitem> <para> To provide feedback on the quality of data distribution. The information can be used by the application to perform flow and congestion control functions, and can also be used for diagnostic purposes. </para> </listitem> <listitem> <para> To distribute identifiers that are used to group different streams together (for example, audio and video). Such a mechanism is necessary because RTP itself does not provide this information. </para> </listitem> <listitem> <para> To observe the number of participants. The RTP data cannot be used to determine the number of participants because some participants may not be sending packets, only receiving them; for example, students listening to an on-line lecture. </para> </listitem> <listitem> <para> To distribute information about a participant; for example, information used to identify students in the lecturer's conferencing user interface. </para> </listitem> </itemizedlist> <para> There are several types of RTCP packets that provide this functionality: </para> <itemizedlist> <listitem> <para> Sender Report (SR) </para> </listitem> <listitem> <para> Receiver Report (RR) </para> </listitem> <listitem> <para> Source Description (SDES) </para> </listitem> <listitem> <para> Application-specific Data (APP) </para> </listitem> </itemizedlist> <para> Sender reports (SR) are used by active senders to distribute transmission and reception statistics. If a participant is not an active sender, reception statistics are still transmitted by sending receiver reports (RR). </para> <para> Descriptive participant information is transmitted in the form of Source Description (SDES) items.
SDES items give general information about a participant, such as their name and e-mail address. They also include a canonical name (CNAME) string, which identifies the sender of the RTP packets. Unlike the SSRC identifier, the CNAME stays constant for a given participant, is independent of the current session, and is normally unique for each participant. This identifier makes it possible to group different streams coming from the same source. </para> <para> There is a packet type that allows application-specific data (APP) to be transmitted with RTP data. When a participant is about to leave the session, a goodbye (BYE) packet is transmitted. </para> <para> The transmission statistics that an active sender distributes include both the number of bytes sent and the number of packets sent. The statistics also include two timestamps: a Network Time Protocol (NTP) timestamp, which gives the time when the report was created, and an RTP timestamp, which describes the same time, but in the same units and with the same random offset as the timestamps in the RTP packets. </para> <para> This is particularly useful when several RTP packet streams have to be associated with each other. For example, if both video and audio signals are distributed, the two media types must be synchronized on playback, which is called inter-media synchronization. Since their RTP timestamps have no relation to each other, there has to be some other way to do this. By giving the relation between each timestamp format and the NTP time, the receiving application can perform the calculations necessary to synchronize the streams. </para> <para> A participant in an RTP session distributes reception statistics about each sender in the session. For a specific sender, a reception report includes the following information: </para> <itemizedlist> <listitem> <para> The fraction of packets lost since the last report. An increase in this value can be used as an indication of congestion.
</para> </listitem> <listitem> <para> The total number of packets lost since the start of the session. </para> </listitem> <listitem> <para> The inter-arrival jitter, measured in timestamp units. An increase in jitter is also a possible indication of congestion. </para> </listitem> <listitem> <para> Information used by the sender to measure the round-trip propagation time to this receiver. The round-trip propagation time is the time it takes for a packet to travel to this receiver and back. </para> </listitem> </itemizedlist> <para> Because RTCP packets are sent periodically by each participant to all destinations, the bandwidth they consume must be kept as small as possible. The RTCP packet interval is therefore calculated from the number of participants and the amount of bandwidth the RTCP packets may occupy. To stagger the transmission of RTCP packets from different participants, the packet interval value is multiplied by a random number. </para> </section> <section> <title>Jitter</title> <para> The term jitter refers to the variation in packet delay between endpoints, generally caused by packet processing in operating systems, codecs, and networks. Jitter affects the quality of the audio and video stream when it is decoded by the receiving application. </para> <para> End-to-end delay is caused by the processing delay at each endpoint, and may be caused in part by IP packets travelling through different network paths from the source to the destination. The time it takes a router to process a packet depends on its level of congestion, which may also vary during the session. </para> <para> Although a large overall delay can cause loss of interactivity, jitter can also cause loss of intelligibility. Though jitter cannot be totally removed, its effects can be reduced by using a jitter buffer at the receiving end.
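The jitter-buffer behaviour described above can be sketched as follows. This is a hypothetical, minimal Python model (not the Media Server's implementation): packets are held until the buffer is half full, after which the media component reads them back in sequence order, so a slightly late packet is still played rather than discarded.

```python
class JitterBuffer:
    """Minimal jitter-buffer sketch: buffer packets until half full,
    then release them to the media component in sequence order."""

    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.packets = {}      # sequence number -> payload, possibly out of order
        self.next_seq = None   # next sequence number the media component expects
        self.filling = True    # still buffering before playback starts

    def put(self, seq: int, payload: bytes) -> None:
        """Store a packet arriving from the network, in any order."""
        self.packets[seq] = payload
        if self.filling and len(self.packets) * 2 >= self.capacity:
            self.filling = False               # half full: playback may begin
            self.next_seq = min(self.packets)  # start from the earliest packet

    def get(self):
        """Called by the media component; returns the next payload in
        sequence order, or None if still buffering or the packet is late."""
        if self.filling or self.next_seq not in self.packets:
            return None
        payload = self.packets.pop(self.next_seq)
        self.next_seq += 1
        return payload
```

In this sketch a packet that arrives after its turn has already been requested is simply picked up on the next read; a real buffer would also discard packets that exceed a maximum delay, trading latency against loss.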
The diagram below shows the effect of receiving media with and without a jitter buffer. </para> <mediaobject id="mms-Jitter-dia-No_Jitter_Buffer.png"> <imageobject> <imagedata align="center" fileref="images/mms-Jitter-dia-No_Jitter_Buffer.png" format="PNG" scalefit="1" width="405"/> </imageobject> </mediaobject> <para> Figure (a) shows that packet 3 is lost because it arrives late. Figure (b) uses a jitter buffer: arriving packets are stored in the buffer, and the media components begin reading from it once it is half full. This way, even if packet 3 arrives a little late, it is still read by the components. </para> </section> </section> </appendix> <appendix lang="en-US"> <title>Revision History</title> <simpara> <revhistory> <revision> <revnumber>3.0</revnumber> <date>Thu Jun 11 2009</date> <author> <firstname>Jared</firstname> <surname>Morgan</surname> <email>[email protected]</email> </author> <revdescription> <simplelist> <member>Second release of the "parameterized" documentation.</member> </simplelist> </revdescription> </revision> <revision> <revnumber>2.0</revnumber> <date>Fri Mar 06 2009</date> <author> <firstname>Douglas</firstname> <surname>Silas</surname> <email>[email protected]</email> </author> <revdescription> <simplelist> <member>First release of the "parameterized", and much-improved JBCP documentation.</member> </simplelist> </revdescription> </revision> </revhistory> </simpara> </appendix> <!-- <index /> --> </book>