First Deliverable of Work Package 2
MOANO Project
December 2011
Table of contents
I. Aims of the document
II. Notes management: an example of mobile GIS application
III. Mobile GIS
III.A Definitions
III.B Types of operations
III.C Structuration of data
III.D Services
III.D.1 Location-Based Services
III.D.2 Map-based mobile services
III.E Multimodality and mobile GIS
III.F Mobile GIS applications
IV. Multimodal interactions
IV.A Task model annotated with modality interactions (annotated CTT)
IV.B UMAR (User Modalities Artefact Representation)
IV.C SMUIML (Synchronized Multimodal User Interaction Modeling Language)
V. Separation of concerns
VI. References
I. Aims of the document
As described in Part 4.2 (Description of the tasks) of the MOANO proposal and illustrated in the figure below,
the efforts of WP2 concentrate on design time.
The future MOANO design environment will make it possible to model a specific type of mobile application: Territory
Exploitation and Discovery Oriented Pervasive Applications (TEDOPA). These are characterised as follows:
1. The manipulated data are spatio-temporal.
2. End-users use mobile applications through different types of sensors (touch, GPS, accelerometer, camera...).
3. Each concern of an application has to be described/modelled by the most relevant people; for example, end-users for interaction modalities, GIS designers for spatio-temporal data... This is not a characteristic of TEDOPA but simply good practice in Software Engineering, as underlined by the Standish Group [CHAOS 2009].
This document aims to provide principles and concepts for each characteristic. In the corresponding scientific
areas, this means:
1.Mobile GIS
2.Multimodal interactions
3.Separation of concerns
Each point is studied from the Model Driven Engineering (MDE) viewpoint, as MDE is central to the MOANO
project. Illustrations are based on an example application currently being developed for our partner
ENLM (Espace Naturel Lille Métropole): Notes Management. This application is described in Part 2. Part 3
presents good practices in mobile GIS: it provides a definition of mobile GIS applications and a list of
typical operations generally executed on mobile devices, describes the types of data and
services used in such applications, and finally shows some existing examples. Modelling multimodal interactions
is the subject of Part 4, which presents three recent graphical formalisms dedicated to such human-computer
interactions and concludes with some considerations about their limitations. Part 5 proposes an overview of three
types of modelling mechanisms to separate design concerns.
II. Notes management: an example of mobile GIS
application
“Notes management” is a mobile application that helps the MOSAIC park gardeners take notes (for instance
observations or accidents) and send them to their GIS. They can also consult notes, reorder the notes list,
add a photo to a note, or delete a note.
While working in the gardens, if the gardener Marc, for example, wants to consult the notes taken during the
day, he turns on his smartphone and starts the already-installed “Notes management” application. Then he reorders
the notes by selecting “by date” in the reordering menu and “today” in the date menu presented on the right. As
shown in figure 1 (1), the application displays the notes list in reply and offers Marc the possibility to consult or
edit them. He chooses, for example, the “Branches to pick up” note, which shows him that there are
branches to pick up in the “Pierre Auvente” garden (figure 1 (2)).
If, after a few days, Marc decides to add a note, he starts the application, presses the menu key of his
smartphone and selects the “add a note” command as shown in figure 1 (3). The application then displays the
“create a note” interface, which includes a text field to write the note's description, the current date and
location detected by GPS, the “take a photo” button to attach a photo to the note, and the “I have
finished” button to go back to the notes list. Figure 1 (4) shows this interface with pre-filled date and location
fields (the system date and Marc's location). Marc can then add a photo (figure 1 (5)) and a description using
the smartphone's virtual keyboard.
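The data a note carries, as described above, can be sketched as a small record type. This is a hypothetical illustration; the names (`Note`, `photo_path`) and the coordinates are ours, not taken from the actual MOANO code base.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class Note:
    description: str
    time: datetime                    # pre-filled with the system date
    location: Tuple[float, float]     # (latitude, longitude) detected by GPS
    photo_path: Optional[str] = None  # set by the "take a photo" button

# Marc's "Branches to pick up" note, as in figure 1 (2)
note = Note(
    description="Branches to pick up",
    time=datetime(2011, 12, 26, 10, 7),
    location=(50.651, 3.075),         # illustrative coordinates
)
```

The optional photo defaults to absent, matching the workflow where Marc may or may not press the “take a photo” button.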
Figure 1. Gardener’s notes application (basic workflow)
III. Mobile GIS
III.A Definitions
GIS concept: A geographic information system (GIS) is a technological tool for comprehending geography
and making intelligent decisions.
Making decisions based on geography is basic to human thinking. Where shall we go, what will it be
like, and what shall we do when we get there are questions that apply to the simple event of going to the store or to the
major event of launching a bathysphere into the ocean's depths. By understanding geography and people's
relationship to location, we can make informed decisions about the way we live on our planet. A GIS organizes
geographic data so that a person reading a map can select the data necessary for a specific project or task. A
thematic map has a table of contents that allows the reader to add layers of information to a basemap of real-world
locations. A good GIS program is able to process geographic data from a variety of sources and
integrate it into a map project. Many countries have an abundance of geographic data for analysis;
governments and local administrations often make GIS datasets publicly available.
GIS maps are interactive. On the computer screen, map users can scan a GIS map in any direction,
zoom in or out, and change the nature of the information contained in the map. They can choose whether to
see the roads, how many roads to see, and how roads should be depicted. Then they can select what other
items they wish to view alongside these roads, such as storm drains, gas lines, rare plants, or hospitals. Some
GIS programs are designed to perform sophisticated calculations; GIS applications can also be embedded into
common activities such as verifying an address or obtaining driving directions.
Mobile GIS concept: Mobile GIS is the expansion of GIS technology from the office into the field. A mobile
GIS enables field-based personnel to capture, store, update, manipulate, analyze, and display geographic
information. Mobile GIS integrates one or more of the following technologies: mobile devices, global
positioning systems (GPS), and wireless communications for Internet GIS access.
Traditionally, the processes of field data collection and editing have been time consuming and error prone.
Geographic data has traveled into the field in the form of paper maps. Field edits were performed using
sketches and notes on paper maps and forms. Once back in the office, these field edits were deciphered and
manually entered into the GIS database. The result has been that GIS data has often not been as up-to-date
or accurate as it could have been. Current developments in mobile GIS have enabled GIS to be taken into the
field as digital maps on compact, mobile computers, providing field access to enterprise geographic
information. This enables organizations to add real-time information to their database and applications,
speeding up analysis, display, and decision making by using up-to-date, more accurate spatial data.
III.B Types of operations
There are different types of operations that may be done when using a mobile GIS. We list them and illustrate
them with our example.
●Field Mapping: Create, edit, and use GIS maps in the field.
For instance, when a gardener looks at a map to see where he/she has taken notes during a particular day.
●Asset Inventories: Create and maintain an inventory of asset locations and attribute information.
When a gardener creates a new note for something he/she wants to remember. The attribute information
is time, location, note details (textual) and an optional photo.
●Asset Maintenance: Update asset location and condition and schedule maintenance.
If a gardener returns to a place where he/she has previously taken a note and realises that this note
was wrong, he/she modifies it.
●Inspections: Maintain digital records and locations of field assets for legal code compliance and ticketing.
One of the gardeners is responsible for security (especially for people with disabilities). A part of his notes
will concern compliance with safety standards.
●Incident Reporting: Document the location and circumstances of incidents and events for further action
or reporting.
An important part of the notes will concern incidents such as a diseased plant, another plant damaged by an animal,
a broken trap...
●GIS Analysis and Decision Making: Perform measuring, buffering, geoprocessing, and other GIS analysis
while in the field.
When a gardener has to plant seeds, he/she generally tries to remember what treatments he/she
applied in previous years (and whether they worked well or not). As there are several hundred plants at
Mosaic, notes about previous treatments will be useful to decide what to do; search by location, time
and keywords will be even more so.
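The last operation, searching notes by time window and keyword, can be sketched as follows. The `Note` structure and the sample data are hypothetical illustrations, not the actual MOANO implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Note:
    description: str
    time: datetime

def search_notes(notes, keyword, since):
    """Return notes taken on or after `since` whose text mentions `keyword`."""
    return [n for n in notes
            if n.time >= since and keyword.lower() in n.description.lower()]

notes = [
    Note("Fertilizer X applied on tulips", datetime(2010, 4, 2)),
    Note("Fertilizer X applied on tulips", datetime(2011, 4, 5)),
    Note("Trap installed near the pond", datetime(2011, 4, 6)),
]
# Keep only this year's fertilizer notes to decide what treatment to reuse.
recent = search_notes(notes, "fertilizer", datetime(2011, 1, 1))
```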
III.D Services
Geoservices are web services which provide, manipulate, analyse, communicate and visualise any kind of
geographic information [Meng et al. 2003]. Mobile geoservices are a subcategory of geoservices
characterised by the use of mobile devices and mobile networks [Dransch 2005]. At run-time, such services are
driven by the user's position, changes of this position and the user's current activities. These services have to
adapt the presented information to the continuously changing context and activity.
[Ye Lei et al. 2006] describe the classic architecture of mobile GIS (see figure 2) and present four types of mobile
geoservices.
●Location-based services (LBS): This is the main category of services, which may contain the following
ones. LBS refers to services which are accessible with mobile devices through wireless communication
and which make use of the location of the terminals [Virrantaus et al. 2001].
●Map services or map-based mobile services: They bring descriptive information and also procedural
knowledge through (mobile) maps [Meng et al. 2005].
●Route guidance services: This type of service guides users along a route to a destination [Kaasinen 2003].
●Tracking services or geotracking: This type of service monitors the exact place where someone
(people) or something (objects) is, including locating other people [Bauer et al. 2005].
We only treat the first two types of mobile geoservices. First, the following description of LBS and map
services (including standards) gives enough material to get a correct overview of geoservice concerns.
Second, route guidance services are already widely available on iOS, Android, Windows Phone...; in the
MOANO project, we will use existing applications or APIs to provide route guidance in our MOANO applications
and design environment. Finally, geotracking is not a main concern of the MOANO project.
Figure 2 - Mobile GIS Information Infrastructure
III.D.1 Location-Based Services
[Reichenbacher 2004] lists typical actions related to everyday “geo-activities” and their associated services.
His classification of actions and services is presented in the following table. We have simply added a column “On the
notes management example” to illustrate the actions with our example.
●Orientation & localisation (locating)
Questions: Where am I? Where is {person|object}?
Objective: localise people and objects. Operations: positioning, geocoding, geodecoding.
Services: deliver the position of persons and objects. Parameters: coordinate, object, address, place name.
Support: orientation in space.
On the notes management example: In what garden am I? Where was the note that I am currently reading taken?
●Navigation (navigating through space, planning a route)
Questions: How do I get to {place name|address|xy}?
Objective: find the way to a destination. Operations: positioning, geocoding, geodecoding, routing.
Services: deliver routes and navigation instructions. Parameters: starting point, destination point, waypoints as locations.
Support: finding the way through space.
On the notes management example: Guide me to the incident indicated in this note. This is really interesting because there are no signs in Mosaic park.
●Search (searching for people and objects)
Questions: Where is the {nearest|most relevant} {person|object}?
Objective: search for people and objects meeting the search criteria. Operations: positioning, geocoding, calculating distance and area, finding relationships.
Services: discover available services; find persons/objects. Parameters: location, area/radius, object/category.
Support: finding relevant objects; finding people.
On the notes management example: I am carrying pesticides: give me the notes that contain “infected plants” and date from the last week.
●Identification (identifying and recognising persons or objects)
Questions: {What|who|how much} is {here|there}?
Objective: identify people and objects; quantify objects. Operations: directory, selection, thematic/spatial search.
Services: deliver (semantic) information about persons/objects. Parameters: object.
Support: information about real-world objects in the usage situation.
On the notes management example: Where are the traps in this garden? That is to say, what notes have been taken here that contain “trap installed”?
●Event check (checking for events; determining the state of objects)
Questions: What happens {here|there}?
Objective: know what happens; know the state of objects. Operations: none.
Services: deliver object state information and event information. Parameters: time, location, object.
Support: finding relevant events; information about the state of real-world objects in the usage situation.
On the notes management example: Give me the notes containing “incident” that have been taken here.
Table: Elementary spatial user actions [Reichenbacher 2004]
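The “Search” action (finding the nearest relevant objects) can be sketched as a radius query over the gardener's notes, using the haversine great-circle distance. The note data, coordinates and radius below are illustrative assumptions, not actual ENLM data.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two (lat, lon) points given in degrees."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def notes_within(notes, here, radius_m):
    """notes: list of (description, (lat, lon)) pairs; keep those within radius_m of here."""
    return [d for d, (lat, lon) in notes
            if haversine_m(here[0], here[1], lat, lon) <= radius_m]

notes = [("infected plants", (50.6510, 3.0750)),
         ("trap installed", (50.6800, 3.1200))]
# The gardener asks for notes within 200 m of his current position.
nearby = notes_within(notes, here=(50.6512, 3.0748), radius_m=200)
```

A production LBS would of course delegate this to a spatial index on the server; the point here is only the parameters involved (location, area/radius, object).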
III.D.2 Map-based mobile services
[Meng et al. 2005] differentiate three main categories of screen maps:
1. View-only maps. Such screens display only a simple map with no particular associated information.
Zooming and scrolling are examples of operations provided by view-only maps.
2. Analytic maps. In such maps, the primary map has been visually transformed to present hyper-dimensional
information. Transformations may be clipping, highlighting, hiding and overlaying. As they
are interactive, analytic maps also propose operations like searching or querying in order to enhance
the visibility of interesting data items.
3. Explorative maps. While analytic maps show geo-information, explorative maps mainly present geo-knowledge.
Their usage context is related to decision-making. In our gardener example, an explorative
map could be a map where the represented plants would change according to the fertilizing choices the
gardener makes on his/her screen.
Map-based mobile services (MBMS) may deliver maps as bitmap images or vector ones. While MBMS initially
delivered bitmap images due to the limited capacity of mobile devices, this has recently changed: Android Google
Maps has used vector maps since December 2010. GpsMid [GpsMid 2011], an open-source project, also proposes
such map management. The main advantages of vector maps are to ease overlaying areas with enhanced
highlighting, coloring and hiding, and to lighten network communications.
[Lehto et al. 2005] propose a five-layer system architecture to support MBM services. Their architecture (see
figure 2) helps in understanding the computational concerns of such services. This separation into several
layers makes it possible to benefit from a context where several servers may propose the same type of services, as in the
ESDIN project [Letho 2011], which is illustrated in figure 2. Each layer consumes data from the layers below.
A Data Service Layer provides the original spatial data. Several providers may exist, and they may use different
response formats. To overcome this format heterogeneity, the Data Integration Layer
homogenises the data into a single format and a single geospatial coordinate system. Even in the same format,
data from different sources may sometimes have to be merged (with operations like edge matching). This is
the goal of the Data Processing Layer. Finally, the Portal Service Layer uses all previous layers to provide
a map (raster or vector image) with all the requested information. The response map generally contains several visual
layers. The last layer is the mobile device client.
The Five-layer open service stack [Lehto et al. 2005]
ESDIN project: MBM services distributed over several countries [Letho 2011]
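The layering described above can be sketched as a chain of functions, each layer consuming the output of the one below. Function names, formats and providers are illustrative assumptions; the real stack is made of distributed services, not local calls.

```python
def data_service(provider):
    """Data Service Layer: original spatial data, in a provider-specific format."""
    return {"provider": provider, "format": provider + "-native", "features": ["note-1"]}

def data_integration(raw):
    """Data Integration Layer: homogenise format and coordinate system."""
    return {"format": "GML", "srs": "EPSG:4326", "features": raw["features"]}

def data_processing(datasets):
    """Data Processing Layer: merge homogenised datasets (e.g. edge matching)."""
    merged = []
    for d in datasets:
        merged.extend(d["features"])
    return {"format": "GML", "srs": "EPSG:4326", "features": merged}

def portal_service(processed, fmt="raster"):
    """Portal Service Layer: render the requested map with all visual layers."""
    return {"map": fmt, "layers": processed["features"]}

# The mobile client consumes the top of the stack; two hypothetical providers.
sources = [data_integration(data_service(p)) for p in ("ENLM", "IGN")]
response = portal_service(data_processing(sources))
```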
Since 1994, the Open Geospatial Consortium [OGC 2011] has standardised geospatial content and services.
The Web Feature Service standard [WFS 2011] is a good candidate to structure the Data Service Layer. A
WFS-compliant server has to propose five types of operations/services:
1. GetCapabilities: what kinds of operations and geospatial entities/objects does the server propose. In
our notes management example, this may refer to operations like filtering notes by date or adding/removing/
modifying notes...
2. DescribeFeatureType: this service returns the structure of each type of entity that the server proposes.
This may be the feature description of a note.
3. GetFeature: this is the main consultation service. It allows clients to get geospatial data by indicating what kind
of objects is requested, in which area and time period... Responses have to be in GML [GML 2011]
(Geography Markup Language). We use GetFeature to get the list of notes for a specific garden
dated from the last week.
4. LockFeature: this service allows some objects to be locked during a transaction.
5. Transaction: this service covers all edit operations: create, update or delete a note.
Here is an example of a request/response pair to illustrate WFS.
A user wants to get the notes he has taken since 21 December 2011 in the Pierre Auvent Garden. This is the
corresponding WFS request.
<?xml version="1.0" ?>
<wfs:GetFeature
service="WFS"
version="1.0.0"
xmlns:wfs="http://www.opengis.net/wfs"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:enlm="http://www.enlm/ns"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.opengis.net/wfs ../wfs/1.0.0/WFS-basic.xsd">
<wfs:Query typeName="enlm:NOTE">
<ogc:PropertyName>enlm:description</ogc:PropertyName>
<ogc:PropertyName>enlm:time</ogc:PropertyName>
<ogc:PropertyName>enlm:location</ogc:PropertyName>
<ogc:Filter>
<ogc:And>
<ogc:PropertyIsGreaterThan>
<ogc:PropertyName>enlm:Note/enlm:time</ogc:PropertyName>
<ogc:Function name="dateParse">
<ogc:Literal>yyyy-MM-dd HH:mm:ss</ogc:Literal>
<ogc:Literal>2011-12-20 23:59:59</ogc:Literal>
</ogc:Function>
</ogc:PropertyIsGreaterThan>
<ogc:Within>
<ogc:PropertyName>enlm:NOTE/enlm:location</ogc:PropertyName>
<gml:Point>
<gml:name>Pierre Auvent Garden</gml:name>
<gml:pos>... </gml:pos>
</gml:Point>
</ogc:Within>
</ogc:And>
</ogc:Filter>
</wfs:Query>
</wfs:GetFeature>
Here is the response in GML format.
<?xml version = '1.0' encoding = 'UTF-8'?>
<wfs:FeatureCollection xsi:schemaLocation="http://www.example.com/myns ..."
xmlns:wfs="http://www.opengis.net/wfs"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<gml:boundedBy xmlns:gml="http://www.opengis.net/gml">
<gml:Box srsName="SDO:8307">
<gml:coordinates>3.0,3.0 6.0,5.0</gml:coordinates>
</gml:Box>
</gml:boundedBy>
<gml:featureMember xmlns:gml="http://www.opengis.net/gml">
<enlm:NOTE fid="129" xmlns:enlm="http://www.enlm/ns">
<enlm:description>A flower has been destroyed by an animal</enlm:description>
<enlm:time>2011-12-26 10:07:33</enlm:time>
<enlm:location>Pierre Auvente Garden</enlm:location>
</enlm:NOTE>
<enlm:NOTE>...</enlm:NOTE>
<enlm:NOTE>...</enlm:NOTE>
</gml:featureMember>
</wfs:FeatureCollection>
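A mobile client might extract the note data from such a GML response with the standard library alone. The snippet below is a simplified sketch: the embedded document mirrors the example above (without the elided members), and the parsing code makes no claim about the real MOANO client.

```python
import xml.etree.ElementTree as ET

# Simplified copy of the GML response shown above.
GML_RESPONSE = """<wfs:FeatureCollection xmlns:wfs="http://www.opengis.net/wfs"
                       xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember>
    <enlm:NOTE fid="129" xmlns:enlm="http://www.enlm/ns">
      <enlm:description>A flower has been destroyed by an animal</enlm:description>
      <enlm:time>2011-12-26 10:07:33</enlm:time>
      <enlm:location>Pierre Auvente Garden</enlm:location>
    </enlm:NOTE>
  </gml:featureMember>
</wfs:FeatureCollection>"""

NS = {"gml": "http://www.opengis.net/gml", "enlm": "http://www.enlm/ns"}

root = ET.fromstring(GML_RESPONSE)
# Collect each NOTE feature as a plain dictionary.
notes = [{"id": n.get("fid"),
          "description": n.findtext("enlm:description", namespaces=NS),
          "time": n.findtext("enlm:time", namespaces=NS),
          "location": n.findtext("enlm:location", namespaces=NS)}
         for n in root.iterfind(".//enlm:NOTE", NS)]
```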
The Web Map Service [WMS 2011] is a good candidate to structure the Portal Service Layer, as it is
dedicated to maps. Like WFS, WMS has two operations/services to expose its capabilities (GetCapabilities) and the
features of the maps it provides (GetFeatureInfo).
The GetMap service is the most relevant here. Many parameters can be set in GetMap requests to specify
in detail what map is requested:
●basemap: to indicate a named map already listed in the server.
●bbox: to delimitate the area concerned (lower-left and upper-right coordinates of the bounding box)
●bgcolor: background color for the map
●datasource: to specify if it concerns another source than WMS
●exceptions: exception messages supported
●format: image/gif, image/jpeg, image/png, image/png8, and image/svg+xml
●height: height of the map in pixels
●layers: named layers proposed by the server like “countries”, “roads”. This may be “notes” of course, but
also “gardens” or “tree areas” in our example.
●srs: the spatial reference system (SDO:srid-value, EPSG:4326, or none)
●transparent: may be true for png map
●width: width of the maps in pixels
And also dynamic_styles, legend-request, mvthemes and version.
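In practice a GetMap request is an HTTP query string built from these parameters. The sketch below assembles one with the standard library; the server URL is a hypothetical ENLM endpoint and the parameter values are illustrative.

```python
from urllib.parse import urlencode

WMS_SERVER = "http://www.enlm.fr/wms"   # hypothetical endpoint

params = {
    "request": "GetMap",
    "layers": "notes,gardens",          # named layers from the server
    "bbox": "3.0,50.6,3.2,50.7",        # lower-left and upper-right corners
    "srs": "EPSG:4326",
    "width": 800,                        # map size in pixels
    "height": 600,
    "format": "image/png",
    "transparent": "true",
    "bgcolor": "0xFFFFFF",
}
url = WMS_SERVER + "?" + urlencode(params)
```

The client then simply fetches `url` and displays the returned image as the basemap.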
This is an example of a WMS response for our previous example:
<Layer>
<Title>Notes in Pierre Auvent Garden since 12/21/2011</Title>
<Layer>
<Name>Notes</Name>
<Title>Notes examples</Title>
<SRS>SDO:8307</SRS>
<LatLonBoundingBox>-180,-90,180,90</LatLonBoundingBox>
...
</Layer>
<Layer>
<Name>Gardens</Name>
<Title>Gardens</Title>
<SRS>SDO:8307</SRS>
<LatLonBoundingBox>-180,-90,180,90</LatLonBoundingBox>
...
</Layer>
</Layer>
The OpenGIS Location Services standard (OpenLS) [OpenLS 2011] is also a good candidate for the Portal Service Layer.
In fact, OpenLS addresses Location-Based Services as a whole, as it defines four service categories: Location
Utility Service, Presentation Service, Route Service, and Directory Service. The Presentation Service is the one
related to Lehto's Portal Service Layer. OpenLS-PS may be considered an improved version of WMS, as it can
manage a map together with additional information.
In the request, we still specify the desired map with the same parameters as WMS, and also the layers.
<XLS
xmlns="http://www.opengis.net/xls"
xmlns:gml="http://www.opengis.net/gml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.opengis.net/xls …"
version="1.1">
<RequestHeader/>
<Request
methodName="NotesMapRequest"
requestID="210"
version="1.1">
<PortrayMapRequest>
<!-- Specification of the map -->
<Output
BGcolor="#a6cae0"
content="URL"
... >
</Output>
<!-- Wanted layers -->
<Basemap filter="Include">
<Layer name="mvdemo.demo_map.THEME_DEMO_COUNTIES"/>
<Layer name="mvdemo.demo_map.THEME_DEMO_HIGHWAYS"/>
</Basemap>
But we may also specify points of interest (POI) and personal layers that will display these POIs. In our
example, the mobile client may define POIs related to the last three notes that the gardener has taken.
<Overlay zorder="1">
<POI
ID="123"
description="note n"
POIName="Note n">
<gml:Point srsName="4326">
<gml:pos>-122.4083257 37.788208</gml:pos>
</gml:Point>
</POI>
<POI><!-- note n-1 --></POI>
<POI><!-- note n-2 --></POI>
</Overlay>
</PortrayMapRequest>
</Request>
</XLS>
The response indicates a map and the overlaying information:
<xls:XLS
xmlns:xls=http://www.opengis.net/xls
xmlns:gml=http://www.opengis.net/gml
xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance
xsi:schemaLocation="http://www.opengis.net/xls …"
version="1.1">
<xls:ResponseHeader/>
<xls:Response numberOfResponses="1" requestID="123" version="1.1">
<xls:PortrayMapResponse>
<xls:Map>
<xls:Content format="GIF_URL" height="600" width="800">
<xls:URL>http://www.enlm.fr/openls/presentation/generatedmaps/298792879387.png</xls:URL>
</xls:Content>
<xls:BBoxContext srsName="4326">
<gml:pos>-122.86037685607968 37.07744235794024</gml:pos>
<gml:pos>-121.66262314392031 37.97575764205976</gml:pos>
</xls:BBoxContext>
</xls:Map>
</xls:PortrayMapResponse>
</xls:Response>
</xls:XLS>
[OSGEO 2011] shows which current software solutions support these standards (only WFS and WMS are
concerned).
III.E Multimodality and mobile GIS
Multimodality has existed for desktop computers since 1980 [Bolt 1980], but it is relatively recent on mobile
devices. It appeared first on PDAs and with the first generation of mobile phones (mostly after 2003). While
the first multimodality experiments were rather makeshift, devices have evolved rapidly to support the first steps of
multimodality. For example, the CoMPASS system [Doyle et al. 2008a] provides input with keyboard/pen/
stylus, voice (commands and dictation), pen gestures, a combination of speech and gesture (e.g. to calculate a
distance between two points), and a combination of gesture and handwriting (e.g. to enter textual information).
For output, in addition to the usual textual and graphical displays, the devices sometimes offer voice synthesis.
Some multimodal mobile applications proposed other combinations of input modalities [Doyle et al. 2008b],
such as speech and pen [Oviatt 2003], speech and lip movements [Benoit et al. 2000], and vision-based
modalities including gaze [Qvarfordt et al., 2005], head and body movement [Nickel 2003] and facial features
[Constantini et al., 2005].
Current devices (such as smartphones) offer new input and output multimodal interactions (e.g. GPS,
accelerometer, compass, augmented reality, vibrations...) that phones and PDAs did not offer before
2007, i.e. before the iPhone appeared, gave users new ways of interacting, and called the previous
results into question.
Among the different experiments in the last decade, we note the following good practices for multimodality:
●The tactile modality is mainly used for selection and data entry;
●Voice is used to make data entry more rapid or when the user's hands are busy;
●The application should offer users the ability to switch between modalities, e.g. data entry with voice and
then with a virtual keyboard, or when the user has to make corrections. Experiments have shown
that s/he uses the same modality once or twice to correct her/his error, and then switches to another
modality if the error is still unresolved;
●The application must provide feedback to users during data entry to show how it understands the user's
request and whether this request succeeds or not;
●The combination of voice and touch offers better results than voice alone (mainly because of
recognition problems) [Doyle et al. 2008a] [Jokinen 2008];
●Multimodality offers better disambiguation and is widely preferred by users (both in usage and for its
efficacy).
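The correction behaviour reported above (retry once or twice with the same modality, then fall back to another) can be expressed as a tiny policy function. This is our own simplified illustration of the observed behaviour, not an algorithm from the cited experiments; the modality names and fallback order are assumptions.

```python
def next_modality(current, attempts, fallback_order=("voice", "keyboard", "touch")):
    """After 2 failed attempts with `current`, switch to the next modality in order."""
    if attempts < 2:
        return current                      # keep retrying with the same modality
    order = list(fallback_order)
    i = order.index(current)
    return order[(i + 1) % len(order)]      # fall back to the next one

# A voice entry fails twice, so the interface offers the virtual keyboard next.
first_retry = next_modality("voice", attempts=1)
fallback = next_modality("voice", attempts=2)
```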
However, we note two major limitations:
●Multimodality is still rarely used in GIS, and always in a very basic manner (voice for commands or
dictation, simple gestures to zoom, browse, or make a selection on a map...) [Baus et al., 2005]. For
instance, we did not find an application that lets users enter notes and browse them as we
suggest in the MOANO project, i.e. by combining gestures with the accelerometer, voice commands, tactile
input…
●Smartphones challenge all the results presented above. Currently, we do not have enough experience on
the use of smartphones for GIS, coupled or not with multimodality, because interaction techniques
evolve too quickly. Users therefore do not have time to get used to one technique before a new one
arises.
III.F Mobile GIS applications
There have been several projects based in the outdoors, such as Ambient Wood [Rogers et al., 2004];
Savannah [Facer et al., 2004]; Frequency 1550 (http://freq1550.waag.org); Butterfly Watching [Chen et al.,
2005]; CAERUS [Naismith et al., 2005]; Environmental Detectives [Klopfer & Squire, 2008] and Riot! 1831
[Reid et al., 2004]. These projects have been inspired by biological or historical aspects of the environment
and presented an engaging user experience for tourists and students alike.
3.1. The CAGE System
In MOBIlearn, [Lonsdale, et al., 2005] implemented this interactional model as part of a software system
called CAGE to support learning through context. CAGE is a movement-based guide: the user's movement
within and between locations, as well as changes to physical posture, orientation and gaze, can all provide
means of interaction and opportunities for adapting the content and delivery of educational material. In a
museum or gallery, the layout of rooms and exhibits is designed as a structured information space, so any
physical movement around the rooms is also a traversal between concepts. This can be used to advantage in
a mobile guide. Consider a person standing in front of a painting in a gallery. A context-aware guide could
adapt its content and presentation to take account of the person's route to that location (“you seem to be
interested in pre-Raphaelite paintings”), their current location (“the portrait here is also by Rossetti”), their
orientation and gaze (“if you turn further to your right you can see a similar painting by Burne-Jones”), and
the time they have been at the current location (“now look more closely at the folds of the dress”). Similar
concepts can be applied in outdoor settings, which although not designed deliberately for educational
purposes, can have structure, coherence and continuity that can be exploited by a movement-based guide.
For example, a rural landscape can reveal contrasts in agricultural use, or changing rock formations along a
pathway.
With CAGE, users carried a handheld device that tracked their location indoors to within 10cm accuracy, using
ultrasonic positioning. The device stored the users’ learning profiles, the history of their movements, and their
current location and their activity, such as moving or standing. From this information it first filtered
information that would not be relevant to the person’s context (such as high resolution images on the small
screen) and then offered relevant support for learning. In trials at an art gallery, as the visitor walked past a
painting that had not been seen before, CAGE gave a short audio description of the work of art. Then, if the
person stopped, it offered a longer spoken introduction based on the learner’s profile. If the user waited
longer, it offered an interactive presentation to explore aspects of the painting. The CAGE system was
successful in provoking discussion among groups of visitors, encouraging them to appreciate paintings in more
detail. But this was at the cost of a complex model of context. Fundamental research is needed on whether
explicit modelling and representation of context can offer clear benefits to learning and, if so, to design new
ways to model and integrate the human and technical aspects of context awareness.
CAGE in action: the CAGE guide, with ultrasonic transmitters on the radiator ledges, and the receiver attached
to the handheld device. The audio could be delivered through a speaker or earpiece.
The Context Awareness Subsystem provided two levels of awareness:
• The context state: elements from the learner and setting at one particular point in time, space or goal
sequence.
• The context substate: elements from the learner and setting that are relevant to the current focus of
learning and the desired level of context awareness.
From a technological viewpoint, the CAGE system was implemented as a client-server application.
The CAGE architecture
3.2. The CAERUS system
CAERUS is a Microsoft-funded research project. It is a complete context-aware software system for visitors to
outdoor tourist sites and educational centres. Consisting of a handheld delivery system and a desktop
administration system, CAERUS provides tools to add new maps, define regions of interest, add themed
multimedia tours, and deliver this information to Pocket PC devices with GPS capability:
• Information is location-based and is delivered automatically when the learner enters a region of
interest.
• The handheld application's push-button interface allows learners to concentrate on their environment
and not on the mobile device.
• The easy to use desktop application allows administrators to easily design and administer multiple
educational scenarios.
CAERUS in action
This system has been used in different scenarios, in particular to efficiently provide time-sensitive information
on historic sites at a country, region or city level. This includes seasonal changes, restoration activities, and context
driven by application or by use. The system offers seamless transition between indoor and outdoor locations,
and it can also assess the effectiveness of exhibits by monitoring visitor routes and the time spent at each location.
The Desktop Administration interface
The handheld interface
3.3. AICHE examples
The AICHE model can be used for the analysis and classification of existing systems, in the design and
engineering process for contextual working and learning, and also in the instructional design for given
educational objectives.
Here are some examples of existing systems described in terms of AICHE components.
• Conference channels: At most scientific conferences today you can learn about comments and
annotations of other participants via blogging or micro-blogging services. Participants can post messages
to a shared commenting channel. Any participant can read these messages and the presenter can, for
example, pick them up for discussion. In AICHE terms, the example uses a messaging channel,
aggregates micro-blogging posts via hash tags, enriches users and channel with the tag information, and
synchronises the users' social context and environment with the messaging channel. If framing is added,
the system could automatically calculate the most prominent sessions. In an automatic configuration you
would join a parallel discussion channel for every presentation room you enter at a conference, displayed
either on your mobile or in a room projection.
• Synchronized TV discussions: several products nowadays combine available digital channels based on
metadata. For example, one can use a digital TV receiver to watch TV and to chat with buddies who
watch the same program. In terms of AICHE this combines an information
delivery channel (TV Program) with an interactive channel (Chat) contextualized via the program
selected and the personal user information about buddies. The interactive channel could be routed to
your personal device while the output channel could be displayed on a public display.
• Ubiquitous coaching service: several existing services use notifications to remind users of important
learning activities connected to real-world activities. Such systems let the instructional designer define a
strategy for following up a seminar with real-world activities that the learner should carry out after the
training, in daily working life. Users receive requests to clean up their desktop or to schedule their weekly
meetings. The output channel is contextualised to the day's time schedule, i.e. the notification is always
delivered in specified time slots in which the activity typically takes place. The user's feedback channel
is a simple reply message.
IV. Multimodal interactions
In order to investigate modeling approaches for multimodal interactive systems, we have modeled a
multimodal version of the notes management application. As it stands, the application offers tactile input and
screen display as its input and output modalities. We therefore assumed that, in addition to the tactile
modality, the application also offers RFID, voice and gesture as input modalities. Thus, the user (gardener)
can add a vocal description to a note, make a gesture (here-gesture) to reorder the notes list by position, and
skip to a new note using an associated RFID tag without needing to start the application.
In the following we present three modeling languages (and their graphical formalisms) for multimodal
applications, each with a different concern:
IV.A Task model annotated with Modality Interaction (CTT annotated)
In order to make interactive systems easier to use, task models are used to take the description of the users'
interaction into account in the design process. CTT (ConcurTaskTrees) [Paterno et al. 1997], introduced by
Paternò in 1997, is one of the most popular task model notations. It is based on a hierarchical structure of
tasks represented as a tree, where tasks are categorized as user, application, interaction and abstract tasks.
The temporal relationships between tasks, such as synchronization, choice, enabling, disabling and
concurrency, are defined by a complete set of symbols. Originally, CTT was not suitable for modeling
multimodal applications, since neither the interaction modalities nor their possible cooperations were
represented in the models.
For this reason, it was enriched in [Clerckx et al. 2007] with interaction modalities and the CARE properties
(Complementarity, Assignment, Redundancy and Equivalence) to allow the description of multimodal tasks.
The four properties associated with the tasks are denoted by their first letter, and modalities by the letter "m"
followed by a number and then detailed beside the model, as shown in the notes management model in
figure 3. At first sight the model appears very complicated, but it describes clearly how the user's multimodal
activities can be performed.
Figure 3: Annotated CTT of notes management
It starts with the global abstract task, which represents the desire to use the application, followed by the
choice between “Using RFID to add a vocal note” and “Starting the application”, which head the two principal
subtrees of the model. Each subtree presents the user's tasks as well as the system feedback, with the
associated modality and CARE operation. The temporal relationships also play a major role in coordinating
tasks and setting the user's preferences. This model can also be used as a guide to help end-users perform
their tasks, which increases the robustness of the interaction.
IV.B UMAR (User Modalities Artefact Representation)
UMAR [Le Bodic et al. 2005] is a descriptive model of simulated multimodal artefacts. It allows the description
of multimodal interactive systems in order to simulate and analyze predictable user activities. It categorizes
modalities as action modalities (used to interact with the system), control modalities (used to control the
output modalities) and output modalities (the system feedback). An action modality is denoted by the 7-tuple
<Cat, P, R, Co, Tr, Ti, Te>, where:
● Cat: the category of the modality
● P: the physical device
● R: the representational system
● Co: the control modality attached
● Tr: the realization time (the time the user requires to perform a command)
● Ti: the interpretation time (the time the system requires to interpret the user's command)
● Te: the run time
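As an illustration only (this is not part of the UMAR tooling), the 7-tuple can be captured as a small record type; the instance values below are hypothetical:

```python
# Illustrative sketch only: a record type for UMAR's 7-tuple describing an
# action modality. The instance values are hypothetical, not taken from UMAR.
from dataclasses import dataclass

@dataclass
class ActionModality:
    cat: str            # Cat: category of the modality
    device: str         # P: physical device
    repr_system: str    # R: representational system
    control: str        # Co: control modality attached
    t_realize: float    # Tr: realization time (user performs the command), s
    t_interpret: float  # Ti: interpretation time (system interprets it), s
    t_run: float        # Te: run time, s

# Hypothetical action modality for adding a vocal description to a note:
m_voice = ActionModality("voice", "microphone", "natural language",
                         "push-to-talk button", 2.0, 0.5, 0.1)
```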
Figure 4: The notes management UMAR model
An output modality is denoted by the 3-tuple <P, R, D>, where D defines the time needed to express the
output modality. The cooperations between these modalities are defined using the CARE properties, each with
a specific symbol (° for Complementarity, & for Redundancy and || for Equivalence). The whole model's
graphical notation is based on hierarchical and concurrent states similar to statecharts. Figure 4 shows the
notes management application modeled in UMAR and structured as two concurrent sub-diagrams. The first
sub-diagram characterizes the two interaction modes (tactile and voice) of the application, while the second
expresses the application's states of use, starting with the “Desire to start taking-notes application” state.
Transitions between states (such as the transition between the first state and the “Consult notes list” state)
are annotated with an action modality (M1) or a cooperation between two or more modalities (M6). This
modular description makes the model easy to read and accessible to all design process participants.
IV.C SMUIML (Synchronized Multimodal User Interaction Modeling
Language)
SMUIML [Dumas 2010] is an XML-based language that describes multimodal interfaces at different
abstraction levels. It was created to configure a proposed framework for the rapid creation of multimodal
interfaces.
As shown in figure 5, created with the SMUIML graphical editor, the language allows a multimodal application
to be described as follows:
Figure 5 : Notes management expressed in SMUIML
● The user-machine dialog is represented by a state machine, where states represent the system feedback,
such as the liste_note state.
● Arrows between states define the possible transitions, or interactions, that trigger a state change, such as
the transition between liste_note and add_note (some transitions also keep the same state).
● Each transition defines the input event that triggers it and the associated interaction modality. The
cooperation between modalities is defined by “seq_and”, “seq_or”, “par_and” and “par_or”, each with
its own symbol. For example, the transition between liste_note and add_note is characterized by the
two events menu_pointed_event and addnote_pointed_event in seq_and cooperation, with the tactile
modality (which, unlike RFID for example, is not explicitly symbolized).
● If a transition produces results other than the state change, the result is defined as an action (a small
orange-arrow icon). For example, the transition between take_photo and note provides the
take_photo_geolocated action.
Some other sections of the SMUIML file are not visible in the graphical version of the model but are defined
in the XML file generated by the editor, such as the modality recognizers.
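For a rough feel of the underlying XML, the liste_note → add_note transition described above could be written as follows. This fragment is only a schematic reconstruction from the description; the element and attribute names are our approximation, not the exact SMUIML schema:

```xml
<!-- Schematic sketch only: names approximate the SMUIML concepts above -->
<transition from="liste_note" to="add_note">
  <seq_and>
    <event name="menu_pointed_event" modality="tactile"/>
    <event name="addnote_pointed_event" modality="tactile"/>
  </seq_and>
</transition>
```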
The three formalisms presented describe the interaction modalities of multimodal applications explicitly and in
an easy-to-read way, each according to its own concern (modeling user tasks, simulation, configuration).
Nevertheless, they are far from being usable by end users, since they are intended for computer scientists
and require low-level technical knowledge.
V. Separation of concerns
Smartphones are spreading [CSCORE11]. These communicating devices give access to a large number of
computing services and are generally equipped with various sensors such as GPS, camera or microphone.
They bring new possibilities and therefore new usages. But the emergence of good practices around these
usages requires feedback and time. We propose to find these good practices more quickly by giving expert
users the possibility to define and refine certain aspects of their mobile applications themselves. For users to
describe the different aspects of their mobile applications, a support had to be determined. The simplified,
graphical representations offered by models seemed to us a relevant answer to this need. The automatic
generation of the solution is an additional contribution to the requirement of accelerating the process of
determining good practices.
But as in any activity, sharing experience between users is important [ArPaWe03]. Using models as a support,
part of this sharing will take the form of reusing fragments of one user's model in another's. Moreover, care
must be taken to offer the definition of the concerns that matter to users.
We will present the principle of separation of concerns for its important role in the reuse process [PARR 04]
and in limiting/simplifying the relationships between models and users. We will then review the mechanisms
that implement reuse (and qualify their compatibility with the forms of separation of concerns seen
previously).
Separation of concerns is a response to realities of practice and industrialisation [Dij74][FrIs94]. But it is first
of all a response to the natural human need to work only on limited contexts [MILLER56]. Indeed, Miller
observes that the human mind can only handle about seven elements (concrete or abstract) simultaneously.
However, its capacity for abstraction seems limited only by the effort and time it requires.
[Ghezzi] considers separation of concerns as a very general principle that can be applied in various forms
(such as splitting a project into several temporal phases). He nevertheless identifies two specialised instances
that help to master the complexity of systems: modularisation and abstraction. Modularisation allows the
decomposition of a system into a set of parts whose functionalities and responsibilities are clearly defined
[Par72]. Abstraction is a reductive simplification and a generalising conceptualisation [Cap08]. Abstraction
keeps only the important aspects and ignores the details in order to ease understanding [ON02]. These two
techniques can be combined to obtain new ones with different properties. For example, encapsulation is a
module whose interfaces are clearly defined but whose implementation is not exposed [Sn86]; the
implementation is accessible only through an abstraction: the module's interface.
The object-oriented approach is known and used for providing modularisation and abstraction techniques
[Sn86]. It is exploited in programming languages and in modelling languages (MOF) [Po01]. However, these
approaches do not always achieve a good separation of concerns and, in particular, a good modularisation
[Kic96]. [KLM+97] points out certain limitations of the object-oriented approach: the answers to
extra-functional concerns are scattered across and tangled with other modules, that is, with other concerns.
We deduce that object-oriented approaches show weaknesses in their ability to modularise the answers to all
concerns. The object-oriented community seems to have mobilised against this difficulty under the banner of
advanced separation of concerns [ASoC02] (ASoC).
ASoC can help answer our needs if we consider the different concerns of the experts (end users, GIS
managers, team leaders, ...) as so many extra-functional concerns. This proposal is in line with work on
multi-dimensional SoC [TOHS08] and with the observation of the notion of viewpoint in aspect approaches
[Bar98]. This remodularisation of the set of concerns (into viewpoints, dimensions or aspects) is not foreign
to model-driven engineering [Jéz10]. Metamodelling allows the definition of specialised languages that can
efficiently describe the domain of a concern, that is, of an aspect. Models therefore offer promising leads for
giving the various experts a support for describing their different concerns.
However, while leads exist for decomposing the set of concerns, difficulties remain. Specialisation for a
concern domain risks leading to an ever more pronounced separation of the different modelling languages.
How can these models be composed? Conversely, some concern domains, such as interaction, necessarily
have relations with other domains. How are the relations between these domains managed? Is this type of
approach compatible with the activity of exchange between experts?
SoC has advantages, among them better reusability [PARR 04]. But this reusability remains a mere potential
if no mechanisms are actually put in place to carry out the reuse. Two steps can be distinguished in reuse:
exportation and importation. [Kruger] decomposes these two steps with the following taxonomy: selection,
abstraction, specialisation and integration. The reused elements that constitute the "grain" are called
artefacts. His observations relate reusability to the nature (source code, application generators, software
components) of what is desired. We will focus on the mechanisms that can take part in reuse and on their
effectiveness for the reuse of models.
Exportation is the step in which the reusable artefact is created. It must be possible to identify it, to separate
it from the other concerns (modularise), to separate it from its context and to make that context redefinable.
Selection consists in delimiting the artefact that answers the concern one seeks to address and in storing this
artefact outside the whole to which it belongs. This artefact can be of various natures: code, library, model
element, etc. A poor modularisation can make this selection step less easy and less effective. Selection thus
aims at extracting the answer to a concern and therefore at separating one concern from the others. If this
modularisation has already been carried out, identifying and delimiting this answer can be faster. Moreover, a
simpler delimitation will require less complex exportation mechanisms and can therefore be used in more
situations. Abstraction aims at removing the artefact's specialisation so that it can be redefined later. Java
generics or the encapsulation of SCA components are examples of abstraction that ease reuse.
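The abstraction step just mentioned can be sketched in Python terms (the text cites Java generics; the names below are hypothetical): the artefact's specialisation, here the element type T, is removed at export time and supplied again by whoever reuses the artefact.

```python
# Sketch of abstraction for reuse: the container is exported with its element
# type left open (T), and specialised again at import time.
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Store(Generic[T]):          # abstracted artefact: T is redefinable later
    def __init__(self) -> None:
        self.items: List[T] = []

    def add(self, item: T) -> None:
        self.items.append(item)

# Reuse: the importer redefines the specialisation (here T = str).
notes: Store[str] = Store()
notes.add("vocal note near the pond")
```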
The importation step aims at inserting the previously exported artefact into an already existing whole. An
importation succeeds when the artefact is adapted to the context of the host whole and inserted into it.
Following Krueger's taxonomy, we speak of specialisation and integration.
Merge
The Unix commands diff and patch are a good illustration of exportation (diff) and importation (patch)
mechanisms. These tools are used to merge code produced by several developers. The artefact of both
exportation and importation is a set of text lines; the source and target wholes are also sets of text lines.
The exportation mechanism selects the artefact by recording the differences between two wholes: the whole
before and after adding the answer to a concern. The exportation mechanism also records additional
information useful for the importation step: the position of the differences and an arbitrary number of lines
neighbouring each difference. The importation mechanism inserts/removes the differences at the positions
indicated by the artefact. Determining the position of the modifications constitutes the phase of adaptation/
specialisation to the context. Importation is ideal when the whole that served as reference for the exportation
and the host whole are identical. These Unix commands suffer from one drawback: when the whole used for
the exportation differs from the host whole, it becomes necessary to bring in mechanisms able to resolve
possible conflicts and mechanisms able to determine the anchor points where the artefact will be placed.
Command    Argument 1            Argument 2        Output
diff -U1   01                    01                @@ -1,3 +1,2 @@
           02                    03                 01
           03                                      -02
                                                    03
patch      @@ -1,3 +1,2 @@       01                01
            01                   02                03
           -02                   03
            03
patch      @@ -1,3 +1,2 @@       01                hunk FAILED - saving
            01                   CONFLICT LINE     rejects to file f.rej
           -02                   03
            03
patch      @@ -1,3 +1,2 @@       01                01
            01                   02                04
           -02                   04                Hunk #1 succeeded at 1
            03                                     with fuzz 1.

Table: Example use of the diff and patch commands
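The first two rows of the table above can be re-created programmatically. The sketch below uses Python's difflib for the export step; the import step uses a naive single-hunk applier written for this illustration (difflib offers no patch function), without the fuzz tolerance of the real patch command:

```python
# Export/import with toy data matching the table: the artefact is the set of
# differences between "before" and "after", plus one line of context (n=1).
import difflib

before = ["01", "02", "03"]
after = ["01", "03"]

# Export (diff): record the differences and their positions.
hunk = list(difflib.unified_diff(before, after, lineterm="", n=1))
# hunk[2:] is ['@@ -1,3 +1,2 @@', ' 01', '-02', ' 03']

# Import (patch): naive applier for a single hunk starting at line 1.
def apply_hunk(target, hunk_body):
    out, i = [], 0
    for line in hunk_body:
        tag, text = line[0], line[1:]
        if tag == " ":              # context line: must match, is kept
            assert target[i] == text
            out.append(text)
            i += 1
        elif tag == "-":            # removed line: must match, is dropped
            assert target[i] == text
            i += 1
        else:                       # "+": added line is inserted
            out.append(text)
    return out + target[i:]

print(apply_hunk(["01", "02", "03"], hunk[3:]))  # -> ['01', '03']
```

When the target differs from the export reference, the context assertions fail, which is exactly the conflict case of the table's third row.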
Examples of merge exploitation exist in the modelling domain: D-Praxis [PRAXIS] for merging collaborative
work, package extension [CLARKE02][CLARK02] for model reuse, or the application of aspects at the
metamodel level [KULK03] are examples that involve merge.
Figure X: Package extension using the package-extension method
Figure X illustrates name-based package extension [CLARK02]. The artefact we seek to apply lets a
note-taking application geolocate its notes. The parts in red represent the conflicting model elements,
resolved by treating them as anchor points. The artefact is a subset of model elements selected during the
selection phase. During importation, a first phase proposes defining renaming rules to adapt the artefact to
the host whole (adaptation phase).
Several drawbacks can be observed:
1. The exportation step has limits. Any element of the artefact can be renamed during the specialisation
step. Conversely, if elements of the source whole and of the host whole bear the same name, renaming
rules will also have to be defined to prevent them from being merged. This side effect is all the more
worrying because implicit relations can be established where they are undesirable. There is therefore no
clear separation between the parts specific to the artefact and the generic parts (which receive the
specialisation). As seen in the first section, this is poor encapsulation.
2. A limitation observed in [CLARK02] concerns the lack of mechanisation of the specialisation step, since
a renaming rule has to be defined manually for each element to be renamed.
3. Applying the merge makes the boundary between the artefact and the host whole disappear. The
artefact becomes an integral part of the host whole, without it being easy to identify or remove. The
modularisation gains of the merge-based method vanish at integration time.
Template
In [CLARK02], templates are proposed to solve limitation (2) above. In the absence of a formal definition of
templates [Parr04], common characteristics of template approaches can be noted:
• The exportation step includes a step of defining the anchor points. In UML, the TemplateSignature
element plays this role and answers limitations (1) and (2) of the merge method. This definition of the
anchor points allows a better selection and abstraction of the artefact, by extruding the parts that are
not specific to the artefact and making them generic.
• The specialisation of the artefact is called binding and consists in supplying the expected anchor
elements.
The forms of integration vary. Among current usages, a first form consists in applying the templates to a
model. Applying the template "populates" the model with model elements that answer a concern [MULLER06].
Figure X2: Template with integration provided by the metamodel (UML)
VI. References
[ArPaWe03] Ardichvili, A., Page, V., & Wentling, T. (2003). Motivation and barriers to participation in virtual
knowledge-sharing communities of practice. Journal of Knowledge Management, 7(1), 64-77. doi:
10.1108/13673270310463626
[ASoC02] Brichau, J., Glandrup, M., Clarke, S., & Bergmans, L. (2006). Object-Oriented Technology. In Á.
Frohner (Ed.), ECOOP 2001 Workshop Reader No. 15, 2323, 107-130. Berlin, Heidelberg: Springer Berlin
Heidelberg. doi:10.1007/3-540-47853-1
[Bardou98] Bardou, D. (1998). Roles,Subjects and Aspects: How Do They Relate? In S. Demeyer & J. Bosch
(Eds.), Object-Oriented Technology: ECOOP’98 Workshop Reader (Vol. 1543, p. 63). Berlin, Heidelberg:
Springer Berlin Heidelberg. doi:10.1007/3-540-49255-0_124
[Bauer et al. 2005] Hans H. Bauer , Tina Reichardt , Anja Schüle, User requirements for location based
services, IADIS International Conference e-Commerce 2005, pages 211-218.
[Baus et al., 2005] Baus, J., Cheverst, K. and Kray, C. A Survey of Map-based Mobile Guides, in Map-based
Mobile Services, 2005, 193-209, DOI: 10.1007/3-540-26982-7_13, Springer-Verlag
[Benoit et al. 2000] Benoit, C., Martin, J.C., Pelachaud, C., Schomaker, L. and Suhm, B. Audio-visual and
Multimodal Speech-based Systems. In Handbook of Multimodal and Spoken Dialogue Systems: Resources,
Terminology and Product Evaluation, D. Gibbon, I. Mertins and R. Moore (Eds.), pp. 102-203, Kluwer.
[Bolt 1980] Bolt, R. Put-that-there: Voice and gesture at the graphics interface. SIGGRAPH Comput. Graph.
14, 3 (July 1980), 262-270. DOI=10.1145/965105.807503.
[Brown 2010] Brown, E. (ed) (2010) Education in the wild: contextual and location-based mobile learning in
action. A report from the STELLAR Alpine Rendez-Vous workshop series. University of Nottingham: Learning
Sciences Research Institute (LSRI). ISBN 9780853582649.
[Caplat 08] Caplat, G. (2008). Modèles et métamodèles. (Lausanne : Presses polytechniques et universitaires
romandes, Ed.).
[CHAOS 2009] CHAOS Summary 2009, Standish Group
[Chen et al., 2005] Chen, Y.-S., T.-C. Kao and Sheu, J.-P. (2005). “Realizing outdoor independent learning with
a butterfly-watching mobile learning system.” Journal of Educational Computing Research 33(4): pp 395-417.
[Clerckx et al. 2007] Clerckx T., Vandervelpen C., Coninx K., Task-based design and runtime support for
multimodal user interface distribution. In Proceedings of Engineering Interactive Systems, 2007.
[Constantini et al., 2005] Constantini, E., Pianesi, F. and Prete, M. Recognising Emotions in Human and
Synthetic Faces: The Role of the Upper and Lower Parts of the Face. In 10th International Conference on
Intelligent User Interfaces, pp. 20-27, San Diego, California.
[CSCORE11] http://www.comscoredatamine.com/2011/02/smartphone-adoption-increased-across-the-u-s-and-europe/
[Dij74] Dijkstra, E. W. (1982). On the role of scientific thought. Selected Writings on Computing: A Personal
Perspective (pp. 60-66). Springer-Verlag. Retrieved from http://www.cs.utexas.edu/~EWD/ewd04xx/
EWD447.PDF
[Doyle et al. 2008a] Doyle, J., Bertolotto, M. and Wilson, D. Multimodal Interaction - Improving Usability and
Efficiency in a Mobile GIS Context. In Proceedings of the First International Conference on Advances in
Computer-Human Interaction (ACHI '08). IEEE Computer Society, Washington, DC, USA, 63-68. DOI=10.1109/
ACHI.2008.18.
[Doyle et al. 2008b] Doyle, J., Bertolotto, M. and Wilson, D. A Survey of Multimodal Interfaces for Mobile
Mapping Applications, in Map-based Mobile Services, Lecture Notes in Geoinformation and Cartography, 2008,
146-167, DOI: 10.1007/978-3-540-37110-6_8, Springer-Verlag.
[Dransch 2005] Dransch, Doris, Activity and Context — A Conceptual Framework for Mobile Geoservices, in
Book “Map-based Mobile Services”, 2005, pages 31-42, Springer Berlin Heidelberg, ISBN : 978-3-540-26982-3
[Dumas 2010] Dumas B., Frameworks, Description Languages and Fusion Engines for Multimodal Interactive
Systems. PhD Thesis, University of Fribourg, Switzerland, 2010.
[ESRI 2007] Mobile GIS. www.esri.com/mobilegis
[Facer et al., 2004] Facer, K., R. Joiner, D. Stanton, J. Reid, R. Hull and Kirk, D. (2004). “Savannah: mobile
gaming and learning?” Journal of Computer Assisted Learning. 20: pp 399-409
[FrIs94] Frakes, W. B., & Isoda, S. (1994). Success factors of systematic reuse. IEEE Software, 11(5), 14-19.
doi:10.1109/52.311045
[Ghezzi02] Ghezzi, C., Jazayeri, M., & Mandrioli, D. (2002). Fundamentals of Software Engineering (2nd ed.).
Prentice Hall.
[GML 2011] Geography Markup Language, OpenGIS Geography Markup Language (GML) Encoding Standard
http://www.opengeospatial.org/standards/gml
[GpsMid 2011] GpsMid project, http://gpsmid.sourceforge.net/
[Jézéquel10] Jézéquel, J. M. (2010). Ingénierie Dirigée par les Modèles : du design-time au runtime. Génie Logiciel - Ingénierie dirigée par les modèles, 93. Retrieved from http://hal.archives-ouvertes.fr/inria-00504666/
[Jokinen 2008] Jokinen, K. User Interaction in Mobile Navigation Applications, in Map-based Mobile Services, Lecture Notes in Geoinformation and Cartography, 2008, 168-197, DOI: 10.1007/978-3-540-37110-6_9, Springer-Verlag
[Kaasinen 2003] Kaasinen E., User needs for location-aware mobile services, Personal and Ubiquitous
Computing, Volume 7, Number 1, 70-79, 2003-05-20, Springer London, ISSN: 1617-4909
[Kiczales96] Kiczales, G. (1996). Beyond the black box: open implementation. IEEE Software, 13(1), 8, 10-11.
doi:10.1109/52.476280
[KLM+97] Kiczales, G., Lamping, J., Mendhekar, A., Maeda, C., Lopes, C. V., Loingtier, J.-M., & Irwin, J. (1997).
Aspect-oriented programming. Computer Science, 1241/1997, 220-242. doi:10.1007/BFb0053381
[Klopfer & Squire, 2008] Klopfer, E. and Squire, K. (2008). “Environmental Detectives – The Development of
an Augmented Reality Platform for Environmental Simulations.” Educational Technology Research and
Development 56: pp 203-228.
[Le Bodic et al. 2005] Le Bodic, L., Approche de l'évaluation des systèmes interactifs multimodaux par simulation comportementale située, PhD Thesis, University of Western Brittany, France, 2005.
[Lehto et al. 2005] Lehto, L., Sarjakoski, T., XML in Service Architectures for Mobile Cartographic Applications, in Map-based Mobile Services, 2005, pages 173-192, Springer Berlin Heidelberg, ISBN: 978-3-540-26982-3
[Letho 2011] Lehto, L., The ESDIN Project, presented at the GI Norden Conference, 9 June 2011
[Meng et al. 2003] Meng, L., and Reichenbacher, T.: Geodienste für Location Based Services, Proceedings 8.
Münchner Fortbildungsseminar Geoinformationssysteme, TU München, 2003
[Meng et al. 2005] Meng L., Zipf A., Reichenbacher T., Map-based mobile services: theories, methods and
implementations, Volume 1, Springer, 2005 - 260 pages
[Miller56] Miller, G. A. (1956). The magical number seven, plus or minus two: some limits on our capacity for
processing information. Psychological Review, 63(2), 81-97. doi:10.1037/h0043158
[Naismith et al., 2005] Naismith, L., Ting, J. & Sharples, M. (2005). CAERUS: A context aware educational
resource system for outdoor sites. CAL ’05 – Virtual Learning? University of Bristol, UK
[Nickel 2003] Nickel, K. and Stiefelhagen, R. Pointing Gesture Recognition based on 3D-tracking of Face, Hands and Head Orientation. In 5th International Conference on Multimodal Interfaces (ICMI '03), pp. 140-146, Vancouver, Canada, 2003.
[OGC 2011] Open Geospatial Consortium, http://www.opengeospatial.org/
[OpenLS 2011] Location Service (OpenLS), OpenGIS Location Service (OpenLS) Implementation Standards,
http://www.opengeospatial.org/standards/ols
[OSGEO 2011] OSGeo Wiki, GIS Mobile Comparison (Feature Comparison), http://wiki.osgeo.org/wiki/GIS_Mobile_Comparison#Feature_Comparison
[OvNa02] Overstreet, C. M., & Nance, R. E. (2002). Issues in enhancing model reuse. International Conference on Grand Challenges for Modeling and Simulation. San Antonio, Texas, USA. Retrieved from http://www.thesimguy.com/GC/papers/WMC02/G108_OVERSTREET.pdf
[Oviatt 2003] Oviatt, S. Multimodal Interfaces. In Handbook of Human-computer Interaction, J. Jacko and A.
Sears (Eds.), pp. 286-304, New Jersey.
[Parnas72] Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules.
Communications of the ACM, 15(12), 1053-1058. doi:10.1145/361598.361623
[Parr04] Parr, T. J. (2004). Enforcing strict model-view separation in template engines. Proceedings of the 13th conference on World Wide Web - WWW '04 (p. 224). New York, New York, USA: ACM Press. doi:10.1145/988672.988703
[Poole01] Poole, J. (2001). Model-driven architecture: Vision, standards and emerging technologies. Workshop
on Metamodeling and Adaptive, (April), 1-15. Retrieved from http://www.adaptiveobjectmodel.com/
ECOOP2001/submissions/Model-Driven_Architecture.pdf
[Paterno et al. 1997] Paternò, F., Mancini, C., Meniconi, S., ConcurTaskTrees: A Diagrammatic Notation for Specifying Task Models. In Proceedings of INTERACT, 1997, pp. 362-369.
[Qvarfordt et al., 2005] Qvarfordt, P. and Zhai, S. Conversing with the User based on Eye-Gaze Patterns. In SIGCHI Conference on Human Factors in Computing Systems (CHI '05), pp. 221-230, Portland, Oregon, 2005.
[Reichenbacher 2004] Reichenbacher, T., Mobile Cartography - Adaptive Visualisation of Geographic Information on Mobile Devices, Dissertation, Department of Cartography, Technische Universität München,
München: Verlag Dr. Hut, 2004
[Snyder86] Snyder, A. (1986). Encapsulation and inheritance in object-oriented programming languages.
Conference proceedings on Object-oriented programming systems, languages and applications - OOPLSA ’86
(pp. 38-45). New York, New York, USA: ACM Press. doi:10.1145/28697.28702
[TOHS08] Tarr, P., Ossher, H., Harrison, W., & Sutton, S. M. (1999). N degrees of separation: multi-dimensional
separation of concerns. Proceedings of the 21st international conference on Software engineering - ICSE ’99
(pp. 107-119). New York, New York, USA: ACM Press. doi:10.1145/302405.302457
[Virrantaus et al. 2001] Virrantaus, K., Markkula, J., Garmash, A., Terziyan, V., Veijalainen, J., Katanosov, A., Tirri, H., "Developing GIS-supported location-based services," in Proceedings of the Second International Conference on Web Information Systems Engineering (WISE 2001), vol. 2, pp. 66-75, 3-6 Dec 2001
[WFS 2011] Web Feature Service, OpenGIS Web Feature Service (WFS) Implementation Specification, http://www.opengeospatial.org/standards/wfs
[WMS 2011] Web Map Service, OpenGIS Web Map Service (WMS) Implementation Specification, http://www.opengeospatial.org/standards/wms
[Ye Lei et al. 2006] Ye Lei & Lin Hui, Which One Should be Chosen for the Mobile Geographic Information
Service Now, WAP vs. i-mode vs. J2ME?, Mobile Networks and Applications (2006), Volume 11, Issue 6, pages
901-915, Kluwer Academic Publishers
[Zimmermann et al. 2007] Zimmermann, A., Lorenz, A., & Oppermann, R. (2007). An Operational Definition of
Context. In Proceedings of 6th International and Interdisciplinary Conference, CONTEXT 2007, Kokinov, B.;
Richardson, D.C.; Roth-Berghofer, Th.R.; Vieu, L. (Eds.) Lecture Notes in Artificial Intelligence Vol. 4635, pp.
558-571.
[Rogers et al., 2004]
[Reid et al., 2004]
[Lonsdale et al., 2005]