<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<!--

  	

  Copyright  2006 Sun Microsystems, Inc. All rights reserved.

-->
<HTML>
<HEAD>
  <TITLE>package</TITLE>
  <!-- Changed  11-Jun-2002 -->
</HEAD>
<BODY>
  The UI API provides a set of features for implementing user
  interfaces in MIDP applications.
  <H2>User Interface</H2>
  
  <P>
    The main criteria for the MIDP have been drafted with mobile
    information devices in mind (e.g., mobile phones and pagers).
    These devices differ from desktop systems in many ways, especially
    how the user interacts with them. The following UI-related
    requirements are important when designing the user interface
    API:</P>
  <UL>
    <LI>
      The devices and applications should be useful to users who are
      not necessarily experts in using computers. </LI>
    <LI>
      The devices and applications should be useful in situations
      where the user cannot pay full attention to the application.
      For example, many phone-type devices will be operated with one
      hand.</LI>
    <LI>
      The form factors and UI concepts of the device differ between
      devices, especially from desktop systems. For example, the
      display sizes are smaller, and the input devices do not always
      include pointing devices.</LI>
    <LI>
      The applications run on MIDs should have UIs that are compatible
      with the native applications so that users find them easy to
      use.</LI>
  </UL>
  <P>
    Given the capabilities of devices that will implement the MIDP and
    the above requirements, the MIDP Expert Group (MIDPEG) decided
    not to simply subset the existing Java UI toolkit, the Abstract
    Window Toolkit (AWT).  Reasons for this decision include:</P>
  <UL>
    <LI>
      Although AWT was designed for desktop computers and optimized
      for those devices, it suffers from assumptions rooted in this
      heritage.</LI>
    <LI>
      When a user interacts with AWT, event objects are created
      dynamically. These objects are short-lived and exist only until
      each associated event is processed by the system. At this point,
      the event object becomes garbage and must be reclaimed by the
      system's garbage collector. The limited CPU and memory
      subsystems of a MID typically cannot handle this behavior.</LI>
    <LI>
      AWT has a rich but desktop-based feature set. This feature set
      includes support for features not found on MIDs.  For example,
      AWT has extensive support for window management (e.g.,
      overlapping windows, window resize, etc.). MIDs have small
      displays which are not large enough to display multiple
      overlapping windows.  The limited display size also makes
      resizing a window impractical. As such, the windowing and layout
      manager support within AWT is not required for MIDs.</LI>
    <LI>
      AWT assumes certain user interaction models. The component set
      of AWT was designed to work with a pointer device (e.g., a mouse
      or pen input). As mentioned earlier, this assumption is valid
      for only a small subset of MIDs since many of these devices have
      only a keypad for user input.</LI>
  </UL>
  
  
  <H3>Structure of the MIDP UI API</H3>
  <P>
    The MIDP UI is logically composed of two APIs: the high-level and the
    low-level.</P>
  <P>
    The high-level API is designed for business applications whose client
    parts run on MIDs. For these applications, portability across devices
    is important. To achieve this portability, the high-level API employs a
    high level of abstraction and provides very little control over look
    and feel. This abstraction is further manifested in the following
    ways:</P>
  <UL>
    <LI>
      The actual drawing to the MID's display is performed by the
      implementation. Applications do not define the visual appearance
      (e.g., shape, color, font, etc.) of the components.</LI>
    <LI>
      Navigation, scrolling, and other primitive interaction is
      encapsulated by the implementation, and the application is not
      aware of these interactions.</LI>
    <LI>
      Applications cannot access concrete input devices like specific
      individual keys. </LI>
  </UL>
  <P>
    In other words, when using the high-level API, it is assumed that
    the underlying implementation will do the necessary adaptation to
    the device's hardware and native UI style.  The classes that
    provide the high-level API are the subclasses of
    {@link javax.microedition.lcdui.Screen}.</P>

  <P>
    The low-level API, on the other hand, provides very little
    abstraction.  This API is designed for applications that need
    precise placement and control of graphic elements, as well as
    access to low-level input events. Some applications also need to
    access special, device-specific features. A typical example of
    such an application would be a game.</P>
  <P>
    Using the low-level API, an application can:</P>
  <UL>
    <LI>
      Have full control of what is drawn on the display.</LI>
    <LI>
      Listen for primitive events like key presses and releases.</LI>
    <LI>
      Access concrete keys and other input devices.</LI>
  </UL>
  <P>
    The classes that provide the low-level API are
    {@link javax.microedition.lcdui.Canvas} and
    {@link javax.microedition.lcdui.Graphics}.</P>
  <P>
    Applications that program to the low-level API are not guaranteed
    to be portable, since the low-level API provides the means to
    access details that are specific to a particular device. If the
    application does not use these features, it will be portable.  It
    is recommended that applications use only the platform-independent
    part of the low-level API whenever possible. This means that the
    applications should not directly assume the existence of any keys
    other than those defined in the <CODE>Canvas</CODE> class, and
    they should not depend on a specific screen size. Rather, the
    application game-key event mapping mechanism should be used
    instead of concrete keys, and the application should inquire about
    the size of the display and adjust itself accordingly. </P>
  <H4>
    Class Hierarchy</H4>
  <P>
    The central abstraction of the MIDP's UI is a
    <code>Displayable</code> object, which encapsulates
    device-specific graphics rendering with user input.  Only one
    <code>Displayable</code> may be visible at a time, and the
    user can see and interact with only the contents of that
    <code>Displayable</code>.</P>
  <P>
    The <code>Screen</code> class is a subclass of
    <code>Displayable</code> that takes care of all user interaction
    with a high-level user interface component.  The <code>Screen</code>
    subclasses handle rendering, interaction, traversal, and
    scrolling, with only higher-level events being passed on to the
    application.</P>
  <P>
    The rationale behind this design is based on the different display
    and input solutions found in MIDP devices. These differences imply
    that the component layout, scrolling, and focus traversal will be
    implemented differently on different devices. If an application
    were required to be aware of these issues, portability would be
    compromised. Simple screenfuls also organize the user interface
    into manageable pieces, resulting in user interfaces that are easy
    to use and learn. </P>
  <P>
    There are three categories of <code>Displayable</code> objects: </P>
  <UL>
    <LI>
      Screens that encapsulate a
      complex user interface
      component (e.g., classes
      <CODE>List</CODE> or <CODE>TextBox</CODE>).
      The structure of these screens is predefined, and the application
      cannot add other components to these screens. </LI>
    <LI>
      Generic screens (instances of the <CODE>Form</CODE> class) that
      can contain <CODE>Item</CODE> objects to represent user
      interface components.  The application can populate
      <CODE>Form</CODE> objects with an arbitrary number of text,
      image, and other components; however, it is recommended that
      <CODE>Form</CODE> objects be kept simple and be used to contain
      only a few closely related user interface components.
    </LI>
    <LI>
      Screens that are used in context of the low-level API
      (i.e., subclasses of class <CODE>Canvas</CODE>).</LI>
  </UL>
  <P>
    Each <code>Displayable</code> can have a title, a
    <CODE>Ticker</CODE>, and a set of <CODE>Commands</CODE> attached
    to it. </P>
  <P>
    The class <CODE>Display</CODE> acts as the display manager that is
    instantiated for each active <code>MIDlet</code> and provides
    methods to retrieve information about the device's display
    capabilities. A <CODE>Displayable</CODE> is made visible by
    calling the <CODE>setCurrent()</CODE> method of
    <CODE>Display</CODE>.  When a <CODE>Displayable</CODE> is made
    current, it replaces the previous <CODE>Displayable</CODE>.  </P>
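  <P>
    As a hedged sketch of this pattern (the MIDlet name and screen
    contents below are illustrative, not part of the API), a MIDlet
    might make its first screen current like this:</P>

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.TextBox;
import javax.microedition.lcdui.TextField;
import javax.microedition.midlet.MIDlet;

// Sketch: obtaining the Display and making a Displayable current.
public class HelloMIDlet extends MIDlet {
    public void startApp() {
        Display display = Display.getDisplay(this);  // one Display per MIDlet
        TextBox box = new TextBox("Hello", "Hello, MIDP", 32, TextField.ANY);
        display.setCurrent(box);  // replaces the previously current Displayable
    }
    public void pauseApp() { }
    public void destroyApp(boolean unconditional) { }
}
```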
  
  <H4>
    Class Overview</H4>
  <P>
    It is anticipated that most applications will utilize screens with
    predefined structures like <CODE>List</CODE>, <CODE>TextBox</CODE>,
    and <CODE>Alert</CODE>. These classes are used in the following
    ways:</P>
  <UL>
    <LI>
      <CODE>List</CODE>
      is used when the user should select from a predefined set of
      choices.</LI>
    <LI>
      <CODE>TextBox</CODE>
      is used when asking the user for textual input.</LI>
    <LI>
      <CODE>Alert</CODE>
      is used to display temporary messages containing text and images.
    </LI>
  </UL>
  <P>
    A special class, <CODE>Form</CODE>, is defined for cases where
    screens with a predefined structure are not sufficient. For
    example, an application may need two <CODE>TextField</CODE>
    objects, or a <CODE>TextField</CODE> and a simple
    <CODE>ChoiceGroup</CODE>. Although <CODE>Form</CODE> allows the
    creation of arbitrary combinations of components, developers
    should keep the limited display size in mind and create only
    simple <CODE>Forms</CODE>.</P>
  <P>
    <CODE>Form</CODE> is designed to contain a small number of closely
    related UI elements.  These elements are the subclasses of
    <CODE>Item</CODE>: <CODE>ImageItem</CODE>,
    <CODE>StringItem</CODE>, <CODE>TextField</CODE>,
    <CODE>ChoiceGroup</CODE>, <CODE>Gauge</CODE>, and
    <CODE>CustomItem</CODE>. The classes <CODE>ImageItem</CODE> and
    <CODE>StringItem</CODE> are convenience classes that make certain
    operations with <CODE>Form</CODE> and <CODE>Alert</CODE>
    easier. By subclassing <CODE>CustomItem</CODE>, application
    developers can introduce <CODE>Items</CODE> with a new visual
    representation and new interactive elements. If the components do
    not all fit on the screen, the implementation may either make the
    form scrollable or implement some components so that they either
    pop up in a new screen or expand when the user edits the
    element.</P>
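  <P>
    A small <CODE>Form</CODE> of this kind might be built as in the
    following sketch (the labels and sizes are illustrative):</P>

```java
import javax.microedition.lcdui.Choice;
import javax.microedition.lcdui.ChoiceGroup;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.TextField;

// Sketch: a simple Form holding two closely related Items.
class SettingsScreen {
    static Form create() {
        Form form = new Form("Settings");
        form.append(new TextField("Name", "", 20, TextField.ANY));
        form.append(new ChoiceGroup("Sound", Choice.EXCLUSIVE,
                new String[] { "On", "Off" }, null)); // null: no images
        return form;
    }
}
```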
  
  
  <H4>
    Interplay with Application Manager</H4>
  <P>
    The user interface, like any other resource in the API, is to be
    controlled according to the principle of MIDP application
    management.  The UI expects the following conditions from the
    application management software: </P>
  <UL>
    <LI>
      <CODE>getDisplay()</CODE> is callable from the
      <CODE>MIDlet</CODE>'s constructor until
      <CODE>destroyApp()</CODE> has returned. </LI>
    <LI>
      The <code>Display</code> object is the same until
      <CODE>destroyApp()</CODE> is called. </LI>
    <LI>
      The <code>Displayable</code> object set by
      <CODE>setCurrent()</CODE> is not changed by the application
      manager. </LI>
  </UL>
  <P>
    The application manager assumes that the application behaves as
    follows with respect to the <code>MIDlet</code> events: </P>
  <UL>
    <LI>
      <CODE>startApp</CODE>
      - The application may call <CODE>setCurrent()</CODE>
      for the first screen. The application manager makes
      <code>Displayable</code> really visible when <CODE>
      startApp()</CODE> returns. Note that <CODE>startApp()</CODE>
      can be called several times if <CODE>pauseApp()</CODE> is called
      in between. This means that one-time initialization should not
      take place here, and the application should not accidentally
      switch to another screen with <CODE>setCurrent()</CODE>. </LI>
    <LI>
      <CODE>pauseApp</CODE>
      - The application should release as many threads as
      possible. Also, if the application should resume with a
      different screen when it is reactivated, that new screen should
      be set with <CODE>setCurrent()</CODE>.</LI>
    <LI>
      <CODE>destroyApp</CODE>
      - The application may delete created objects.</LI>
  </UL>
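  <P>
    The lifecycle expectations above can be sketched as follows (the
    class name, flag, and screen are illustrative):</P>

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// Sketch: a MIDlet that guards against repeated startApp() calls.
public class LifecycleMIDlet extends MIDlet {
    private boolean initialized = false;
    private Form mainScreen;

    public void startApp() {
        if (!initialized) {                  // one-time initialization only
            mainScreen = new Form("Main");
            Display.getDisplay(this).setCurrent(mainScreen);
            initialized = true;
        }
        // On later calls (after pauseApp), do not switch screens here.
    }

    public void pauseApp() {
        // Release threads and other resources where possible.
    }

    public void destroyApp(boolean unconditional) {
        // Release created objects.
    }
}
```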
  
  
  <A NAME="events"></A>
  <H3>Event Handling</H3>
  <P>
    User interaction causes events, and the implementation notifies
    the application of the events by making corresponding
    callbacks. There are four kinds of UI callbacks:</P>
  <UL>
    <LI>
      Abstract commands that are part of the high-level API</LI>
    <LI>
      Low-level events that represent single key presses and releases
      (and pointer events, if a pointer is available)</LI>
    <LI>
      Calls to the <CODE>paint()</CODE> method of a
      <CODE>Canvas</CODE> class</LI>
    <LI>
      Calls to a <CODE>Runnable</CODE> object's <CODE>run()</CODE>
      method requested by a call to <CODE>callSerially()</CODE> of
      class <CODE>Display</CODE>
    </LI>
  </UL>
  <P>
    All UI callbacks are serialized, so they will never occur in
    parallel. That is, the implementation will never call a callback
    before a prior call to <em>any</em> other callback has
    returned.  This property enables applications
    to be assured that processing of a previous user event will have
    completed before the next event is delivered.  If multiple UI
    callbacks are pending, the next is called as soon as possible after
    the previous UI callback returns.  The implementation also
    guarantees that the call to <CODE>run()</CODE> requested by a call
    to <CODE>callSerially()</CODE> is made after any pending repaint
    requests have been satisfied.</P>
  <P>
    There is one exception to the callback serialization rule, which occurs
    when the {@link javax.microedition.lcdui.Canvas#serviceRepaints
    Canvas.serviceRepaints} method is called.  This method causes
    the <code>Canvas.paint</code> method to be called and waits
    for it to complete.  This occurs even if the caller of
    <code>serviceRepaints</code> is itself within an active callback.
    There is further discussion of this issue 
    <A HREF="#concurrency">below</A>.</P>
  <P>
    The following callbacks are all serialized with respect to each other:
    </P>
  <UL>
    <li> {@link javax.microedition.lcdui.Canvas#hideNotify
                Canvas.hideNotify} </li>
    <li> {@link javax.microedition.lcdui.Canvas#keyPressed
                Canvas.keyPressed} </li>
    <li> {@link javax.microedition.lcdui.Canvas#keyRepeated
                Canvas.keyRepeated} </li>
    <li> {@link javax.microedition.lcdui.Canvas#keyReleased
                Canvas.keyReleased} </li>
    <li> {@link javax.microedition.lcdui.Canvas#paint
                Canvas.paint} </li>
    <li> {@link javax.microedition.lcdui.Canvas#pointerDragged
                Canvas.pointerDragged} </li>
    <li> {@link javax.microedition.lcdui.Canvas#pointerPressed
                Canvas.pointerPressed} </li>
    <li> {@link javax.microedition.lcdui.Canvas#pointerReleased
                Canvas.pointerReleased} </li>
    <li> {@link javax.microedition.lcdui.Canvas#showNotify
                Canvas.showNotify} </li>
    <li> {@link javax.microedition.lcdui.Canvas#sizeChanged
                Canvas.sizeChanged} </li>
    <li> {@link javax.microedition.lcdui.CommandListener#commandAction
                CommandListener.commandAction} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#getMinContentHeight
                CustomItem.getMinContentHeight} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#getMinContentWidth
                CustomItem.getMinContentWidth} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#getPrefContentHeight
                CustomItem.getPrefContentHeight} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#getPrefContentWidth
                CustomItem.getPrefContentWidth} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#hideNotify
                CustomItem.hideNotify} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#keyPressed
                CustomItem.keyPressed} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#keyRepeated
                CustomItem.keyRepeated} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#keyReleased
                CustomItem.keyReleased} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#paint
                CustomItem.paint} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#pointerDragged
                CustomItem.pointerDragged} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#pointerPressed
                CustomItem.pointerPressed} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#pointerReleased
                CustomItem.pointerReleased} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#showNotify
                CustomItem.showNotify} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#sizeChanged
                CustomItem.sizeChanged} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#traverse
                CustomItem.traverse} </li>
    <li> {@link javax.microedition.lcdui.CustomItem#traverseOut
                CustomItem.traverseOut} </li>
    <li> {@link javax.microedition.lcdui.Displayable#sizeChanged
                Displayable.sizeChanged} </li>
    <li> {@link javax.microedition.lcdui.ItemCommandListener#commandAction
                ItemCommandListener.commandAction} </li>
    <li> {@link javax.microedition.lcdui.ItemStateListener#itemStateChanged
                ItemStateListener.itemStateChanged} </li>
    <li> <code>Runnable.run</code> resulting from a call to
         {@link javax.microedition.lcdui.Display#callSerially
                Display.callSerially} </li>
  </UL>

  <P>
    Note that {@link java.util.Timer Timer}
    events are not considered UI events.
    Timer callbacks may run concurrently with UI event
    callbacks, although {@link java.util.TimerTask TimerTask}
    callbacks scheduled on the same <code>Timer</code> are
    serialized with each other.
    Applications that use timers must guard their
    data structures against concurrent access from timer threads
    and UI event callbacks.  Alternatively, applications may have
    their timer callbacks use
    {@link javax.microedition.lcdui.Display#callSerially Display.callSerially}
    so that work triggered by timer events can be serialized with
    the UI event callbacks.</P>
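  <P>
    As a sketch of the <CODE>callSerially()</CODE> alternative (the
    class and field names are illustrative), a timer task can hand its
    UI work over to the event-callback series like this:</P>

```java
import java.util.Timer;
import java.util.TimerTask;
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Display;

// Sketch: serializing timer-driven work with UI callbacks via callSerially().
class TickDriver {
    private final Display display;
    private final Canvas canvas;
    private final Timer timer = new Timer();

    TickDriver(Display display, Canvas canvas) {
        this.display = display;
        this.canvas = canvas;
    }

    void start() {
        timer.schedule(new TimerTask() {
            public void run() {              // runs on the timer thread
                display.callSerially(new Runnable() {
                    public void run() {      // runs serialized with UI callbacks
                        canvas.repaint();
                    }
                });
            }
        }, 0, 100);                          // every 100 ms
    }
}
```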
  
  <H4>
    Abstract Commands</H4>
  <P>
    Since the MIDP UI is highly abstract, it does not dictate any
    concrete user interaction technique, such as soft buttons or
    menus. Also, low-level user interactions such as traversal and
    scrolling are not visible to the application. MIDP applications
    define <CODE>Commands</CODE>, and the implementation may manifest
    these via soft buttons, menus, or whatever mechanism is
    appropriate for the device.</P>
  <P>
    <code>Commands</code> are installed on a <CODE>Displayable</CODE>
    (<CODE>Canvas</CODE> or <CODE>Screen</CODE>) with the
    <CODE>addCommand</CODE> method of class <CODE>Displayable</CODE>.</P>
  <P>
    The native style of the device may assume that certain types of
    commands are placed in standard places. For example, the
    &quot;go-back&quot; operation may always be mapped to the right
    soft button. The <CODE>Command</CODE> class allows the application
    to communicate such semantic meaning to the implementation so
    that these standard mappings can be effected.</P>
  <P>
    The implementation does not actually implement any of the
    semantics of the <CODE>Command</CODE>.  The attributes of a
    <CODE>Command</CODE> are used only for mapping it onto the user
    interface.  The actual semantics of a <CODE>Command</CODE> are
    always implemented by the application in a
    <CODE>CommandListener</CODE>.</P>
  <P>
    <CODE>Command</CODE> objects have three attributes:</P>
  <UL>
    <LI>Label:
      Shown to the user as a hint. A single <CODE>Command</CODE> can
      have two versions of labels: short and long. The implementation
      decides whether the short or long version is appropriate for a
      given situation.  For example, an implementation can choose to
      use a short version of a given <CODE>Command</CODE> near a soft
      button and the long version of the <CODE>Command</CODE> in a
      menu.</LI>
    <LI>Type:
      The purpose of a command.  The implementation uses the
      command type to place the command appropriately within the
      device's user interface. <code>Commands</code> with similar
      types may, for example, be found near each other in a certain
      dedicated place in the user interface.  Devices will often have
      a policy for the placement and presentation of certain
      operations.  For example, a &quot;backward navigation&quot;
      command might always be placed on the right soft key on a
      particular device, but it might be placed on the left soft key
      on a different device.  The <CODE>Command</CODE> class provides
      a fixed set of command types that give a <code>MIDlet</code>
      the ability to tell the device implementation the intent of a
      <code>Command</code>.  The application can use the
      <CODE>BACK</CODE> command type for commands that perform
      backward navigation.  On the devices mentioned above, this type
      information would be used to assign the command to the
      appropriate soft key.</LI>
    <LI>Priority:
      Defines the relative importance between <CODE>Commands</CODE> of
      the same type. A command with a lower priority value is more
      important than a command of the same type but with a higher
      priority value. If possible, a more important command is
      presented before, or made more easily accessible than, a less
      important one.</LI>
  </UL>
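  <P>
    These attributes come together in the <CODE>Command</CODE>
    constructors, as in the following sketch (the labels and the
    choice of types and priorities are illustrative):</P>

```java
import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.Form;

// Sketch: Commands with short/long labels, types, and priorities.
class CommandSetup {
    static void attach(Form form) {
        // Command(shortLabel, longLabel, commandType, priority)
        Command ok   = new Command("OK", "Accept settings", Command.OK, 1);
        // Command(label, commandType, priority)
        Command back = new Command("Back", Command.BACK, 2);
        form.addCommand(ok);   // placement is decided by the implementation
        form.addCommand(back); // BACK type may map to a standard soft key
    }
}
```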

  <H4>
    Device-Provided Operations</H4>
  <P>

    Many high-level UI classes also make additional operations
    available in the user interface. These additional operations are
    not visible to applications, only to the end user. The set of
    operations available depends entirely on the user interface
    design of the specific device. For example, an operation that
    allows the user to change the text input mode between alphabetic
    and numeric is needed on devices that have only an ITU-T keypad.
    More complex input systems will require additional operations.
    Some of the available operations are presented in the user
    interface in the same way as application-defined commands.
    End users need not know which operations are provided by the
    application and which are provided by the system. Not all
    operations are available in every implementation.  For example, a
    system that has a word-lookup-based text input scheme will
    generally provide additional operations within the
    <CODE>TextBox</CODE> class.  A system that lacks such an input
    scheme will also lack the corresponding operations.</P>

  <P>
    Some operations are available on all devices, but the way the
    operation is implemented may differ greatly from device to device.
    Examples of this kind of operation are: the mechanism used to
    navigate between <code>List</code> elements and <code>Form</code>
    items, the selection of <code>List</code> elements, moving an
    insertion position within a text editor, and so forth.  Some
    devices do not allow the direct editing of the value of an
    <CODE>Item</CODE>, but instead require the user to switch to an
    off-screen editor.  In such devices, there must be a dedicated
    selection operation that can be used to invoke the off-screen
    editor.  The selection of <CODE>List</CODE> elements could, for
    example, be implemented with a dedicated &quot;Go&quot; or
    &quot;Select&quot; key or some other similar key.  Some devices
    have no dedicated selection key and must select elements using
    some other means.</P>
  <P>
    On devices where the selection operation is performed using a
    dedicated select key, this key will often not have a label
    displayed for it.  It is appropriate for the implementation to use
    this key in situations where its meaning is obvious.  For example,
    if the user is presented with a set of mutually exclusive options,
    the selection key will obviously select one of those options.
    However, in a device that doesn't have a dedicated select key, it
    is likely that the selection operation will be performed using a
    soft key that requires a label.  The ability to set the
    select-command for a <CODE>List</CODE> of type
    <CODE>IMPLICIT</CODE> and the ability to set the default command
    for an <CODE>Item</CODE> are provided so that the application can
    set the label for this operation and so it can receive
    notification when this operation occurs. </P>
  
  <H4>
    High-Level API for Events</H4>
  <P>
    The handling of events in the high-level API is based on a
    listener model. <CODE>Screens</CODE> and <CODE>Canvases</CODE> may
    have listeners for commands.  An object that wishes to be a
    listener must implement the <CODE>CommandListener</CODE>
    interface, which has one method:</P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    void commandAction(Command c, Displayable d);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    The application gets these events if the <CODE>Screen</CODE> or
    <CODE>Canvas</CODE> has attached <CODE>Commands</CODE> and if
    there is a registered listener. A unicast version of the listener
    model is adopted, so a <CODE>Screen</CODE> or
    <CODE>Canvas</CODE> can have only one listener at a time.</P>
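  <P>
    A minimal sketch of this pattern (the command and its behavior
    are illustrative):</P>

```java
import javax.microedition.lcdui.Command;
import javax.microedition.lcdui.CommandListener;
import javax.microedition.lcdui.Displayable;
import javax.microedition.lcdui.Form;

// Sketch: registering a single CommandListener on a Form.
class ExitHandler implements CommandListener {
    private final Command exit = new Command("Exit", Command.EXIT, 1);

    void install(Form form) {
        form.addCommand(exit);
        form.setCommandListener(this);  // replaces any previous listener
    }

    public void commandAction(Command c, Displayable d) {
        if (c == exit) {
            // Handle the exit request, e.g. by having the MIDlet
            // call notifyDestroyed().
        }
    }
}
```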
  <P>
    There is also a listener interface for state changes of the
    <CODE>Items</CODE> in a <CODE>Form</CODE>. The method</P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    void itemStateChanged(Item item);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    defined in the interface <CODE>ItemStateListener</CODE> is called
    when the value of an interactive <CODE>Gauge</CODE>,
    <CODE>ChoiceGroup</CODE>, or <CODE>TextField</CODE> changes. It
    is not expected that the listener will be called after every
    change. However, if the value of an <CODE>Item</CODE> has been
    changed, the listener will be called for that change sometime
    before it is called for another item or before a command is
    delivered to the <code>Form's</code>
    <code>CommandListener</code>. It is suggested that the change
    listener be called at least after focus (or its equivalent) is
    lost from the field. The listener should be called only if the
    field's value has actually changed.</P>
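  <P>
    A sketch of an <CODE>ItemStateListener</CODE> (the field and its
    handling are illustrative):</P>

```java
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.Item;
import javax.microedition.lcdui.ItemStateListener;
import javax.microedition.lcdui.TextField;

// Sketch: reacting to value changes of an interactive Item.
class NameWatcher implements ItemStateListener {
    private final TextField nameField;

    NameWatcher(Form form, TextField nameField) {
        this.nameField = nameField;
        form.setItemStateListener(this);  // one listener per Form
    }

    public void itemStateChanged(Item item) {
        if (item == nameField) {
            String value = nameField.getString();  // the changed value
            // React to the change here (illustrative).
        }
    }
}
```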
  
  
  <H4>
    Low-Level API for Events</H4>
  <P>
    The <CODE>Canvas</CODE> class provides the following methods for
    handling low-level key events: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    public void keyPressed(int keyCode);    
    public void keyReleased(int keyCode);    
    public void keyRepeated(int keyCode);</code> </pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    The last method, <CODE>keyRepeated</CODE>, is not necessarily
    available on all devices. Applications can check the
    availability of repeat actions by calling the following method of
    <CODE>Canvas</CODE>:</P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    public boolean hasRepeatEvents();    </code> </pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    The API requires that there be standard key codes for the ITU-T keypad
    (0-9, *, #), but no keypad layout is required by the API. Although an
    implementation may provide additional keys, applications relying on
    these keys are not portable.</P>
  <P>
    In addition, the class <CODE>Canvas</CODE> has methods for
    handling abstract game events. An implementation maps all these
    key events to suitable keys on the device. For example, a device
    with four-way navigation and a select key in the middle could use
    those keys, but a simpler device may use certain keys on the
    numeric keypad (e.g., <code>2</code>, <code>4</code>,
    <code>5</code>, <code>6</code>, <code>8</code>). These game events
    allow development of portable applications that use the low-level
    events.  The API defines a set of abstract key-events:
    <code>UP</code>, <code>DOWN</code>, <code>LEFT</code>,
    <code>RIGHT</code>, <code>FIRE</code>, <code>GAME_A</code>,
    <code>GAME_B</code>, <code>GAME_C</code>, and
    <code>GAME_D</code>.</P>
  <P>
    An application can get the mapping of the key events to abstract
    key events by calling: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    public int getGameAction(int keyCode);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    If the logic of the application is based on the values returned by
    this method, the application is portable and will run regardless
    of the keypad design.</P>
  <P>
    It is also possible to map an abstract event to a key with:</P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    public int getKeyCode(int gameAction);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P> where <CODE>gameAction</CODE> is
    <code>UP</code>, <code>DOWN</code>, <code>LEFT</code>,
    <code>RIGHT</code>, <code>FIRE</code>, etc.  On some devices, more
    than one key is mapped to the same game action, in which case the
    <CODE>getKeyCode</CODE> method will return just one of them.
    Properly written applications should map the key code to an
    abstract key event and make decisions based on the result.</P>

  <P> The mapping between keys and abstract events does not change
    during the execution of the game.</P> <P> The following is an
    example of how an application can use game actions to interpret
    keystrokes.</P>
  <P>
  <TABLE BORDER="2"> <TR> <TD ROWSPAN="1"
    COLSPAN="1">
          <PRE> <CODE>
    class MovingBlocksCanvas extends Canvas {
        public void keyPressed(int keyCode) {
            int action = getGameAction(keyCode);    
            switch (action) {
            case LEFT:
                moveBlockLeft();
                break;
            case RIGHT:
                ...
            }
        }
    }     </CODE> </PRE>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    The low-level API also has support for pointer events, but since
    pointer input mechanisms may not be present on all devices, the
    following callback methods may never be called on some
    devices: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><CODE>
    public void pointerPressed(int x, int y);
    public void pointerReleased(int x, int y);
    public void pointerDragged(int x, int y);    </CODE></pre>
      </TD>
    </TR>
  </TABLE>
</P>
  <P>
    The application may check whether a pointer is available by calling
    the following methods of class <CODE>Canvas</CODE>: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><CODE>
    public boolean hasPointerEvents();
    public boolean hasPointerMotionEvents();    </CODE></pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  
  <H4>
    Interplay of High-Level Commands and the Low-Level API</H4>

  <P>
    The class <CODE>Canvas</CODE>, which is used for low-level events
    and drawing, is a subclass of <CODE>Displayable</CODE>, and
    applications can attach <CODE>Commands</CODE> to it. This is
    useful for jumping to an options setup <CODE>Screen</CODE> in the
    middle of a game. Another example could be a map-based navigation
    application where keys are used for moving in the map but commands
    are used for higher-level actions.</P>
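  <P>
    For example, a game's <CODE>Canvas</CODE> might attach an options
    command as sketched below; the command label, priority, and
    listener logic are illustrative choices, not mandated by the
    API: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    class MazeCanvas extends Canvas implements CommandListener {
        private final Command optionsCmd =
            new Command("Options", Command.SCREEN, 1);

        MazeCanvas() {
            addCommand(optionsCmd);
            setCommandListener(this);
        }

        public void commandAction(Command c, Displayable d) {
            if (c == optionsCmd) {
                // switch the Display to an options setup Screen
            }
        }

        public void paint(Graphics g) {
            // game rendering
        }
    }    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>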
  <P>
    Some devices may not have the means to invoke commands when
    <CODE>Canvas</CODE> and the low-level event mechanism are in use.
    In that case, the implementation may provide a means to switch to
    a command mode and back.  This command mode might pop up a menu
    over the contents of the <CODE>Canvas</CODE>.  In this case, the
    <CODE>Canvas</CODE> methods <CODE>hideNotify()</CODE> and
    <CODE>showNotify()</CODE> will be called to indicate when the
    <CODE>Canvas</CODE> has been obscured and unobscured,
    respectively.</P>
  <P>
    The <CODE>Canvas</CODE> may have a title and a <CODE>Ticker</CODE>
    like the <CODE>Screen</CODE> objects.  However,
    <CODE>Canvas</CODE> also has a full-screen mode where the title
    and the <CODE>Ticker</CODE> are not displayed.  Setting this mode
    indicates that the application wishes for the <CODE>Canvas</CODE>
    to occupy as much of the physical display as is possible.  In this
    mode, the title may be reused by the implementation as the title
    for pop-up menus.  In normal (not full-screen) mode, the
    appearance of the <CODE>Canvas</CODE> should be similar to that of
    <CODE>Screen</CODE> classes, so that visual continuity is retained
    when the application switches between low-level
    <CODE>Canvas</CODE> objects and high-level <CODE>Screen</CODE>
    objects.</P>

  <H3>Graphics and Text in Low-Level API</H3>
  
  <H4>
    The Redrawing Scheme</H4>
  <P>
    Repainting is done automatically for all <CODE>Screens</CODE>, but
    not for <CODE>Canvas</CODE>; therefore, developers utilizing the
    low-level API must understand its repainting scheme. </P>
  <P>
    In the low-level API, repainting of <CODE>Canvas</CODE> is done
    asynchronously so that several repaint requests may be served
    within a single call as an optimization. This means that the
    application requests repainting by calling the method
    <CODE>repaint()</CODE> of class <CODE>Canvas</CODE>. The actual
    drawing is done in the method <CODE>paint()</CODE>
    -- which is provided by the subclass of <CODE>Canvas</CODE> --
    and does not necessarily happen synchronously with
    <CODE>repaint()</CODE>. It may happen later, and several repaint
    requests may cause a single call to <CODE>paint()</CODE>. The
    application can flush pending repaint requests by calling
    <CODE>serviceRepaints()</CODE>.</P>
  <P>
    As an example, assume that an application moves a box of width
    <CODE>wid</CODE> and height <CODE>ht</CODE> from coordinates
    (<CODE>x1,y1</CODE>) to coordinates (<CODE>x2,y2</CODE>), where
    <CODE>x2&gt;x1</CODE> and <CODE>y2&gt;y1</CODE>: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    // move coordinates of box
    box.x = x2;
    box.y = y2;
    
    // ensure old region repainted (with background)    
    canvas.repaint(x1,y1, wid, ht);
    
    // make new region repainted
    canvas.repaint(x2,y2, wid, ht);
    
    // make everything really repainted
    canvas.serviceRepaints();</code> </pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    The last call causes the repaint thread to be scheduled. The
    repaint thread finds the two requests in the event queue and
    repaints a region that covers the union of the two repaint
    areas: </P>
  <p>
    <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    graphics.clipRect(x1,y1, (x2-x1+wid), (y2-y1+ht));      
    canvas.paint(graphics);      </code></pre>
      </TD>
    </TR>
  </TABLE></p>
  <P>
    In this imaginary part of an implementation, the call
    <CODE>
      canvas.paint()</CODE>
    causes the application-defined <CODE>paint()</CODE>
    method to be called.</P>
  
  
  <H4>
    Drawing Model</H4>
  <P>
    The primary drawing operation is pixel replacement, which is used
    for geometric rendering operations such as lines and rectangles.
    With offscreen images, support for full transparency is required,
    and support for partial transparency (alpha blending) is
    optional.</P>
  <P>
    A 24-bit color model is provided with 8 bits each for the red,
    green, and blue components of a color. Not all devices support
    24-bit color, so they will map colors requested by the application
    into colors available on the device. Facilities are provided in
    the <CODE>
      Display</CODE>
    class for obtaining device characteristics, such as whether color
    is available and how many distinct gray levels are available. This
    enables applications to adapt their behavior to a device without
    compromising device independence.</P>
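  <P>
    A sketch of such adaptation; the surrounding code and the color
    choices are hypothetical: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    // midlet is the application's MIDlet instance
    Display display = Display.getDisplay(midlet);
    int color;
    if (display.isColor()) {
        color = 0x0000FF00;    // use green on color devices
    } else {
        // display.numColors() reports the number of gray levels here
        color = 0x00FFFFFF;    // fall back to white
    }
    g.setColor(color);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>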
  <P>
    Graphics may be rendered either directly to the display or to an
    off-screen image buffer. The destination of rendered graphics
    depends on the origin of the graphics object. A graphics object
    for rendering to the display is passed to the <CODE>Canvas</CODE>
    object's <CODE>paint()</CODE> method. This is the only way to
    obtain a graphics object whose destination is the
    display. Furthermore, applications may draw by using this graphics
    object only for the duration of the <CODE>paint()</CODE>
    method. </P>
  <P>
    A graphics object for rendering to an off-screen image buffer may
    be obtained by calling the <CODE>getGraphics()</CODE> method on
    the desired image. These graphics objects may be held indefinitely
    by the application, and requests may be issued on these graphics
    objects at any time.</P>
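  <P>
    A minimal double-buffering sketch along these lines; the buffer
    contents and the <CODE>BufferedCanvas</CODE> name are
    illustrative: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    class BufferedCanvas extends Canvas {
        // a mutable off-screen image the size of the drawing area
        private final Image buffer =
            Image.createImage(getWidth(), getHeight());
        // unlike the paint() Graphics, this may be held indefinitely
        private final Graphics bg = buffer.getGraphics();

        void drawBox() {
            bg.setColor(0x00FF0000);
            bg.fillRect(10, 10, 20, 20);   // render into the buffer
            repaint();                     // schedule a screen update
        }

        public void paint(Graphics g) {
            // copy the finished buffer to the display
            g.drawImage(buffer, 0, 0, Graphics.TOP | Graphics.LEFT);
        }
    }    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>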
  <P>
    The <code>Graphics</code> class has a current color that is set
    with the <code>setColor()</code> method.  All geometric rendering,
    including lines, rectangles, and arcs, uses the current color.
    The pixel representing the current color replaces the destination
    pixel in these operations.  There is no background color.
    Painting of any background must be performed explicitly by the
    application using the <code>setColor()</code> and rendering
    calls.  </P>
  <P>
    Support for full transparency is required, and support for partial
    transparency (alpha blending) is optional.  Transparency (both
    full and partial) exists only in off-screen images loaded from PNG
    files or from arrays of ARGB data.  Images created in such a
    fashion are <em>immutable</em> in that the application is
    precluded from making any changes to the pixel data contained
    within the image.  Rendering is defined in such a way that the
    destination of any rendering operation always consists entirely of
    fully opaque pixels. </P>
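  <P>
    For instance, an immutable image containing a transparent pixel
    can be built from ARGB data as sketched here, using
    <CODE>Image.createRGBImage()</CODE>; the pixel values are
    arbitrary illustrations: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    int[] argb = {
        0x00000000,   // fully transparent
        0xFFFF0000,   // opaque red
        0x8000FF00,   // half-transparent green (alpha honored only
                      // if the device supports alpha blending)
        0xFF0000FF    // opaque blue
    };
    // the resulting 2x2 image is immutable; its pixels cannot change
    Image img = Image.createRGBImage(argb, 2, 2, true);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>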
  
  <H4>
    Coordinate System</H4>
  <P>
    The origin <code>(0,0)</code> of the available drawing area and
    images is in the upper-left corner of the display. The numeric
    values of the x-coordinates monotonically increase from left to
    right, and the numeric values of the y-coordinates monotonically
    increase from top to bottom. Applications may assume that
    horizontal and vertical distances in the coordinate system
    represent equal distances on the actual device display. If the
    shape of the pixels of the device is significantly different from
    square, the implementation of the UI will do the required
    coordinate transformation. A facility is provided for translating
    the origin of the coordinate system. All coordinates are specified
    as integers.</P>
  <P>
    The coordinate system represents locations between pixels, not the
    pixels themselves. Therefore, the first pixel in the upper left
    corner of the display lies in the square bounded by coordinates
    <code>(0,0), (1,0), (0,1), (1,1)</code>.</P>
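  <P>
    A small sketch of origin translation inside <CODE>paint()</CODE>;
    the offsets and rectangle size are arbitrary: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    public void paint(Graphics g) {
        g.translate(10, 20);      // move the origin to (10,20)
        // drawn with its upper-left corner at (10,20) on the display
        g.drawRect(0, 0, 30, 20);
        // restore the original origin before returning
        g.translate(-g.getTranslateX(), -g.getTranslateY());
    }    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>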
  <P>
    An application may inquire about the available drawing area by
    calling the following methods of <CODE>Canvas</CODE>: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><CODE>
    public int getWidth();
    public int getHeight();    </CODE></pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  
  <H4>
    Font Support</H4>
  <P>
    An application may request a font with the attributes specified
    below.  However, the underlying implementation may support only a
    subset of these attributes, so it is up to the implementation to
    return a font that most closely resembles the requested font.</P>
  <P>
    Fonts are managed by the implementation rather than constructed by
    the application: a programmer calls the static
    <CODE>getFont()</CODE> method instead of instantiating new
    <CODE>Font</CODE> objects. This paradigm eliminates the garbage
    creation normally associated with the use of fonts.</P>
  <P>
    The <CODE>Font</CODE> class provides calls that access font
    metrics. The following attributes may be used to request a font
    (from the <CODE>
      Font</CODE>
    class): </P>
  <UL>
    <LI>
      Size: <code>SIZE_SMALL</code>, <code>SIZE_MEDIUM</code>,
      <code>SIZE_LARGE</code>.</LI>
    <LI>
      Face: <code>FACE_PROPORTIONAL</code>, <code>FACE_MONOSPACE</code>,
      <code>FACE_SYSTEM</code>.</LI>
    <LI>
      Style: <code>STYLE_PLAIN</code>, <code>STYLE_BOLD</code>,
      <code>STYLE_ITALIC</code>, <code>STYLE_UNDERLINED</code>.</LI>
  </UL>
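
  <P>
    Requesting a font and using its metrics might look like this
    sketch; the text and attribute choices are arbitrary: </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    Font f = Font.getFont(Font.FACE_PROPORTIONAL,
                          Font.STYLE_BOLD,
                          Font.SIZE_MEDIUM);
    g.setFont(f);
    int w = f.stringWidth("Score");   // pixel width of the text
    int h = f.getHeight();            // line height including leading
    g.drawString("Score", 0, 0, Graphics.TOP | Graphics.LEFT);    </code></pre>
      </TD>
    </TR>
  </TABLE>
  </P>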
  
  <A NAME="concurrency"></A>
  <H3>Concurrency</H3>
  <P>
    The UI API has been designed to be thread-safe. The methods may be
    called from callbacks, <CODE>TimerTasks</CODE>, or other threads created
    by the application. Also, the implementation generally does not hold any
    locks on objects visible to the application. This means that the
    application's threads can synchronize with each other and with the event
    callbacks by locking any object according to a synchronization policy
    defined by the application.  One exception to this rule occurs with the
    {@link javax.microedition.lcdui.Canvas#serviceRepaints
    Canvas.serviceRepaints} method.  This method calls and awaits
    completion of the <code>paint</code> method.  Strictly speaking,
    <code>serviceRepaints</code> might not call <code>paint</code>
    directly, but instead it might cause another thread to call
    <code>paint</code>.  In either case, <code>serviceRepaints</code>
    blocks until <code>paint</code> has returned.  This is a significant
    point because of the following case.  Suppose the caller of
    <code>serviceRepaints</code> holds a lock that is also needed by the
    <code>paint</code> method.  Since <code>paint</code> might be called
    from another thread, that thread will block trying to acquire the lock.
    However, this lock is held by the caller of <code>serviceRepaints</code>,
    which is blocked waiting for <code>paint</code> to return.  The result
    is deadlock.  In order to avoid deadlock, the caller of
    <code>serviceRepaints</code> <em>must not</em> hold any locks
    needed by the <code>paint</code> method.</P>
  <P>
    The UI API also includes a mechanism, similar to those in other UI
    toolkits, for serializing actions with the event stream.  The method
    {@link javax.microedition.lcdui.Display#callSerially Display.callSerially}
    requests that the <code>run</code> method of a <code>Runnable</code>
    object be called, serialized with the event stream.  Code that uses
    <CODE>serviceRepaints()</CODE> can usually be rewritten to use
    <CODE>callSerially()</CODE>.  The following code illustrates
    this technique:
  </P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    class MyCanvas extends Canvas {    
        void doStuff() {
            // &lt;code fragment 1&gt;    
            serviceRepaints();
            // &lt;code fragment 2&gt;    
        }
    }    </code> </pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  <P>
    The following code is an alternative way of implementing the same
    functionality:</P>
  <P>
  <TABLE BORDER="2">
    <TR>
      <TD ROWSPAN="1" COLSPAN="1">
        <pre><code>
    class MyClass extends Canvas 
       implements Runnable {            
        Display display;  // obtained earlier via Display.getDisplay()

        void doStuff() {
            // &lt;code fragment 1&gt;
            display.callSerially(this);
        }

        // called only after all pending repaints served    
        public void run() {
            // &lt;code fragment 2&gt;;
        }
    }    </code> </pre>
      </TD>
    </TR>
  </TABLE>
  </P>
  
  <H3>Implementation Notes</H3>
  <P>
    The implementation of a <code>List</code> or
    <code>ChoiceGroup</code> may include keyboard shortcuts for
    focusing and selecting the choice elements, but the use of these
    shortcuts is not visible to the application program.</P>
  <P>
    In some implementations the UI components -- <code>Screens</code>
    and <code>Items</code> -- will be based on native components. It
    is up to the implementation to free the used resources when the
    Java objects are not needed anymore. One possible implementation
    scenario is a hook in the garbage collector of KVM.</P>
  
@since MIDP 1.0
</BODY>
</HTML>
