KDD Nuggets 96:38, e-mailed 96-12-06

News:
* GPS, ComputerWorld on Drilling for Data,
http://www.computerworld.com/guide/961202drill.html
* EricT, ISOFT Agreement With Business Objects
* C. Matheus, Database Visualization and VRML,
http://www.best.com/~cyber23/virarch/article.html
* Francis, Query: First use of 'Data Mining' and 'Data Warehousing'?
Publications:
* M. Montgomery, Book: Probabilistic Expert Systems by Glenn Shafer
Siftware:
* W. Cohen, RIPPER learning system now available to researchers
http://www.research.att.com/~wcohen/ripperd.html
* M. Ankerst, VisDB, Visual Data Mining System,
http://www.dbs.informatik.uni-muenchen.de/dbs/projekt/visdb/visdb.html
Positions:
* E. King, VP, MARKETING SEGMENTATION MANAGER at
CoreStates Bank of Delaware
Meetings:
* C. Taylor, CFP: MLNET workshop: Learning In Changing Domains,
http://www.amsta.leeds.ac.uk/statistics/ecml97/dyn.htm
* Computational Finance at Oregon Graduate Institute,
http://www.cse.ogi.edu/CompFin/
--
KDD Nuggets is a newsletter for the Knowledge
Discovery in Databases (KDD) community, focusing on the latest research and
applications.

Submissions are most welcome and should be emailed with a
DESCRIPTIVE subject line (and a URL, when available) to kdd@gte.com.
To subscribe, email kdd-request@gte.com a message with
subscribe kdd-nuggets
in the first line (the rest of the message and subject are ignored).
See http://info.gte.com/~kdd/subscribe.html for details.

Nuggets frequency is approximately 3 times a month.
Back issues of Nuggets, a catalog of S*i*ftware (data mining tools),
and a wealth of other information on Data Mining and Knowledge Discovery
are available at the Knowledge Discovery Mine site, http://info.gte.com/~kdd

-- Gregory Piatetsky-Shapiro (moderator)

********************* Official disclaimer ***********************************
* All opinions expressed herein are those of the writers (or the moderator) *
* and not necessarily of their respective employers (or GTE Laboratories) *
*****************************************************************************

~~~~~~~~~~~~ Quotable Quote ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
New Translations to Some Computer Terms

PCMCIA People Can't Memorize Computer Industry Acronyms
ISDN It Still Does Nothing
APPLE Arrogance Produces Profit-Losing Entity
SCSI System Can't See It
DOS Defunct Operating System
BASIC Bill's Attempt to Seize Industry Control
IBM I Blame Microsoft
DEC Do Expect Cuts
CD-ROM Consumer Device, Rendered Obsolete in Months
OS/2 Obsolete Soon, Too.
WWW World Wide Wait
MACINTOSH Most Applications Crash; If Not, The Operating System Hangs

Contributed by Wally Beddoe to the Oracle Service Humour Mailing List.
(thanks to John Vittal)

>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Tue, 03 Dec 1996 18:07:59 -0500
From: Gregory Piatetsky-Shapiro (gps@gte.com)
Subject: ComputerWorld on Drilling for Data

see http://www.computerworld.com/guide/961202drill.html

By Wayne W. Eckerson



The market for decision-support tools is evolving at a dizzying
pace. Innovative tools hit the market each month, and existing
products are transforming themselves faster than Madonna.

Each tool was designed to help end users drill into corporate
databases to collect data and view it from the various angles needed
to answer business questions.

Expect a market shakeout in the next two years in which the winners
swallow up niche products to offer versatile, multipurpose tool sets
that support an array of decision-support operations. A key
differentiator among tools will be their ability to support
high-performance, interactive queries across the World Wide Web.

See http://www.computerworld.com/guide/961202drill.html for the full text.


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Wed, 27 Nov 96 09:58:13 +0100
From: erict@isoftfr.isoft.fr (Eric)

ISOFT ANNOUNCES LICENSING AGREEMENT WITH BUSINESS OBJECTS


Leading Data Mining Tools Vendor Provides Best of Breed Data Mining
Technology to Business Objects Integrated Query / Reporting and OLAP
tools users


Paris, France, November 18, 1996: ISoft, a leading Data Mining tools
provider, today announced the signing of a major software licensing
agreement with Business Objects. The agreement allows Business Objects to
integrate ISoft's ALICE v. 4.0 Data Mining package (standard version) into
its own product range. Under the name of BUSINESSMINER, Business Objects
will offer both a stand-alone and a fully integrated version of ISoft's
ALICE.

With this licensing agreement, ISoft was chosen by Business Objects to act
as its Data Mining technology provider and to expand the range of
functionality offered to its users. ISoft and Business Objects will work
closely to design and build future Data Mining tools to be marketed by
Business Objects.


Availability

BUSINESSMINER will be available in production during the first quarter of
1997, on all Windows platforms, including Windows 95, Windows NT, and
Windows 3.1 (via Win32S).


About ISoft

ISoft has been a major provider of data mining solutions for the past six
years, providing leading-edge tools and applications to the most demanding
data owners throughout Europe. ISoft designs and markets a wide range of
Data Mining tools in two main product families: ALICE, for desktop PC
end-users, and AC2, a UNIX and PC Data Mining toolkit for expert users and
server-based Data Mining. AC2's unique 'knowledge modeling' feature
translates complex information into easily understood graphical models. AC2
is also available in the form of a set of libraries.

By launching ALICE, ISoft has made the power of data mining available to
every desktop PC user. ISoft's ALICE is a high-profile data mining product
for exploring databases through interactive decision trees and creating
queries, reports, charts, and even rules for predictive models. Released in
July 1996, ALICE introduced major breakthroughs in user-friendliness for
data mining on the PC.
ALICE is now available in the following versions:
ALICE Standard package, ALICE Enterprise Edition, ALICE 'What if?' Module,
and ALICE Corporate Edition, on all Windows platforms, including Windows 95,
Windows NT, and Windows 3.1.


About Business Objects

Business Objects (NASDAQ:BOBJY) is the world's leading supplier of
integrated query, reporting, and OLAP tools. The company's flagship
product, BUSINESSOBJECTS, provides mainstream business users with access to
information stored in corporate databases, data warehouses, and packaged
applications.

Business Objects led the overall decision support tools market in 1995 with
software license revenues of $48.7M, and outsold its nearest query and
reporting tools competitor two-to-one according to IDC. Business Objects
products are in use at over 3,600 organizations in over 60 countries, and
the company has sold more than 400,000 licenses around the world.


AC2 and ALICE d'ISoft are trademarks of ISoft S.A.
BUSINESSOBJECTS is a trademark of Business Objects S.A. Other company and
product names may be trademarks of the respective companies with which they
are associated.


>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Tue, 26 Nov 1996 10:26:20 -0500
From: 'Christopher J. Matheus' (matheus@gte.com)
Subject: Database Visualization and VRML

http://www.best.com/~cyber23/virarch/article.html

cyber23: Virtual Architecture: Database Visualization

[Image: SQL2VRML - entry space]

There is a lot of talk in the VR community about the concept
of "database mining", in which a user might extract
information from the database and then "surf" through
it as one might navigate through the WWW. The power of this
concept is extended when the tables returned by a
query create visual representations using the scene description
language VRML.

[Image: Netscape - Table Index Page]

[Image: WebSpace - Table Index Space]

This particular example uses a database called
"PeopleDB" which contains employee, mailstop, and
department info and therefore contains little scalar data. Even
with one or two scalar fields, visualization can help the user
understand the request by representing the data in a topological
and morphological form. The data becomes a landscape where
differences between indexed objects become obvious quickly.


The advent of dynamic database visualization could assist in
understanding information extracted from queries, and has great
potential in the fields of chemistry, medicine, statistics,
mathematics, finance, investment, real estate and almost any area
of study where extraction produces large datasets.



"On The Fly" world building

[Image: PeopleDB - Primary Query Page]


One of the goals of this exercise was to rely on an existing
infrastructure of market products to create an information
visualization system. This example uses a Sybase10 SQL server
working in conjunction with Netsite and a SyPerl client, which
makes the queries and builds the VRML worlds. There are two
positive results from this architecture. The first is that any
user who has the Netscape and WebSpace products can access this
application at any point of development, because everything is
executed on the server side. The second is that the data is
represented "on the fly", so the tables and
visualizations are always created from direct queries at the time
of viewing.




SQL2VRML Query:

select deptname,respext,deptnum from dept where division =
'ADMIN' order by deptnum


Here the user has queried the database to count the number of
records grouped by a particular field. The representation of that
query is a table of columns extracted from the query and a 3D
representation of that table. Each object is an abstraction of
what one record has returned from the query.
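
To make the mapping concrete, here is a minimal Python sketch of the
row-to-world step. The site's own generator is SyPerl against Sybase;
the rows, field meanings, scaling and the use of VRML 2.0 syntax below
are assumptions for illustration only.

rows = [
    ("Payroll",    "4411", 10),   # stand-ins for (deptname, respext, deptnum)
    ("Facilities", "4420", 23),
    ("Purchasing", "4431", 17),
]

def box(x, height, label):
    # One record becomes one box; an indexed field drives its height.
    return (
        "Transform {\n"
        f"  translation {x:.1f} {height / 2:.2f} 0\n"
        "  children Shape {\n"
        "    appearance Appearance { material Material { diffuseColor 0.2 0.6 0.9 } }\n"
        f"    geometry Box {{ size 1 {height:.2f} 1 }}\n"
        "  }\n"
        f"}}  # {label}\n"
    )

def world(rows):
    parts = ["#VRML V2.0 utf8\n"]
    for i, (deptname, respext, deptnum) in enumerate(rows):
        parts.append(box(x=2.0 * i, height=deptnum / 5.0, label=deptname))
    return "".join(parts)

print(world(rows))

Because the world is generated per request, the VRML the user walks
through is always as fresh as the underlying tables.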




Index Attributes

[Image: Index Attributes - Visualizing a Record]


Table representations are created using data directly
extracted from tables; that data creates a semantic map to
information that is specified within the query. Information
extracted changes the shape, size and color of the object that
represents its dataset.


Any object within a particular dataset will have identical
index attributes. A consistent mapping of indexes to size, shape or
color gives the user a reference to what data is semantically
affecting the visualization. The field
that affects size, for instance, should affect size throughout the
dataset.
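
One plausible way to enforce that consistency, sketched here in Python
with invented field names and scaling, is to fix the field-to-attribute
mapping once per dataset and normalize against the whole result set:

def make_index_attributes(records, size_field, color_field):
    # The same field always drives the same visual property,
    # normalized over the dataset so objects stay comparable.
    sizes = [r[size_field] for r in records]
    hues  = [r[color_field] for r in records]
    s_min, s_span = min(sizes), (max(sizes) - min(sizes)) or 1
    h_min, h_span = min(hues), (max(hues) - min(hues)) or 1

    def attributes(record):
        # Normalize against the dataset, not the individual record.
        return {
            "size": 0.5 + 2.0 * (record[size_field] - s_min) / s_span,
            "hue": (record[color_field] - h_min) / h_span,
        }
    return attributes

records = [{"headcount": 12, "budget": 300},
           {"headcount": 40, "budget": 950},
           {"headcount": 25, "budget": 410}]
attrs = make_index_attributes(records, "headcount", "budget")
for r in records:
    print(r["headcount"], attrs(r))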




Self Replicating Scripts


When the user makes a primary request to the table based on a
field, each result itself becomes a query of that subset. This is
achieved by writing Perl scripts that pass along the required
data from the previous script to write a new one. This new script
executes the new query, taking the user to the next subset of
information; in this way the user "surfs" through the
queries.



Self-replicating scripts pass attributes along to their
children without destroying their parents, allowing queries to
grow in granularity through the attributes passed on by their
parents.
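
In code, the trick amounts to serializing the parent's constraints
into every anchor the child world contains. A minimal Python sketch,
assuming a hypothetical CGI endpoint and invented parameter names:

from urllib.parse import urlencode

SCRIPT = "/cgi-bin/sql2vrml"  # hypothetical generator endpoint

def child_anchor(parent_filters, field, value, label):
    # Each result row becomes a VRML Anchor whose URL re-invokes the
    # generator with the parent's constraints plus one new one, so
    # following it runs the narrower query.
    filters = dict(parent_filters)   # inherit without destroying the parent
    filters[field] = value           # grow granularity by one attribute
    return (
        "Anchor {\n"
        f'  url "{SCRIPT}?{urlencode(filters)}"\n'
        f'  description "{label}"\n'
        "  children Shape { geometry Box { size 1 1 1 } }\n"
        "}\n"
    )

# From the division-level world, one object links one level deeper:
print(child_anchor({"division": "ADMIN"}, "deptnum", 4420, "Facilities"))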




Architectural Metaphor


The architectural metaphor serves as a natural point of
reference from which the user can read the data. The
"space" articulates the domain in which a particular
query has taken place. All data extracted from the query resides
within the architectural domain. The space also implies scale and
speaks to the spatial domain of optimum interaction. In this way
the archetypal elements of wall, floor, datum and column guide
the user through the data intuitively, without forcing a
modal interface on them.


The architectural domain is created from spatial archetypes
such as floor, roof, wall and datum, which provide a spatial point
of reference that distinguishes the data from the set it resides
within. By creating a space, the user intuitively knows that no
data within the dataset resides outside of the space.


[Image: Dataset Domain - Architectural Metaphor]


Wall

The primary bounding object; it also defines the openness and
closure of the space. At human scale, primary interaction occurs
within the bounds of this element.


Floor

The secondary bounding object, defining the difference between the
user and the sky. Floor is usually the translation point for primary
field data, where the primary index field translates the data on the
Z axis.


Datum

Also serves as a bounding object without the breakage of wall,
but the primary intent of datum is to define the scale of primary
interaction.


Column

The primary vertical element. Column is very good at describing
the scale of interaction, while doing very little to define the
domain of the dataset.




Morphology and Topology


Horizon is inherent in the concept of perspective, and with a
horizon line even small differences in a dataset are read
easily. Much as one might spot a small ship on the ocean, in a sea
of data such differences give the user visual landmarks to navigate
by.




[Image: Visualizing the Dataset - Morphology and Topology]


The topology of the dataset is the set of criteria that defines the
domain of a particular query; it is the rules that make the
landscape. The morphology of the dataset is the set of criteria that
defines the representation of the objects within that dataset; it
is the rules that make the objects. For instance, the data that
defines the parameters of a query would create the topology, or
landscape, in which a dataset would be returned, whereas the data
extracted from that query would create the morphology, or the
objects on the landscape.




Conclusion


This demonstration was created with "off-the-shelf"
products, using infrastructure that is common in today's corporate
environment. The tools and products required
to create convincing visualizations of SQL databases are accessible
today. The primary issue at hand is how the data's representation
is designed in a way that creates "meta-information", or
information that is gained about the information itself. This is
the greatest potential of "database mining": that we
may learn something about the information itself, a whole that
becomes greater than its parts. Without good design and real
consideration of the interaction issues, database
visualization will be little more than a 3D table.




Clay Graham - cyber23@best.com





>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Sun, 03 Dec 1995 10:40:29 +0800
From: Francis (s703@pdns.nudt.edu.cn)

Dear Sir:
Would you tell me when and where 'Data Mining' and
'Data Warehousing' were first used? Thanks a lot!

[P.S. -- I guess the first use of data mining was sometime around
1990, but I would be curious to know.
I believe Bill Inmon coined the term Data Warehousing
in the late 1980s. If you have better information, please email
to kdd@gte.com and I will summarize to the list. GPS]

>~~~Publications:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From: montgomery@siam.org
Date: Tue, 03 Dec 96 08:42:22 EST

Recently published from SIAM

Probabilistic Expert Systems
Glenn Shafer
CBMS-NSF Regional Conference Series in Applied Mathematics 67

Probabilistic Expert Systems emphasizes the basic computational
principles that make probabilistic reasoning feasible in expert
systems. The key to computation in these systems is the modularity of
the probabilistic model. Shafer describes and compares the principal
architectures for exploiting this modularity in the computation of
prior and posterior probabilities. He also indicates how these similar
yet different architectures apply to a wide variety of other
problems of recursive computation in applied mathematics and
operations research.

The field of probabilistic expert systems has continued to flourish
since the author delivered his lectures on the topic in June 1992, but
the understanding of join-tree architectures has remained missing from
the literature. This monograph fills this void by providing an
analysis of join-tree methods for the computation of prior and
posterior probabilities in belief nets. These methods, pioneered in
the mid to late 1980s, continue to be central to the theory and
practice of probabilistic expert systems. In addition to purely
probabilistic expert systems, join-tree methods are also used in
expert systems based on Dempster-Shafer belief functions or on
possibility measures. Variations are also used for computation in
relational databases, in linear optimization, and in constraint
satisfaction.
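
For a taste of what join-tree computation looks like, here is a
minimal Python sketch (not taken from the book) for the chain
A -> B -> C, with cliques {A,B} and {B,C} and separator {B}: passing
a single message over the separator is enough to compute marginals
locally, clique by clique.

# Conditional probability tables for the chain A -> B -> C.
pA   = {0: 0.6, 1: 0.4}
pBgA = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7}  # P(B|A), key (a, b)
pCgB = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 0.5}  # P(C|B), key (b, c)

# Clique potentials: phi1(a, b) = P(a) P(b|a); phi2(b, c) = P(c|b).
phi1 = {(a, b): pA[a] * pBgA[(a, b)] for a in (0, 1) for b in (0, 1)}
phi2 = dict(pCgB)

# Message over the separator {B}: sum A out of phi1.
msg = {b: sum(phi1[(a, b)] for a in (0, 1)) for b in (0, 1)}

# Absorbing the message turns phi2 into the joint P(B, C).
joint_bc = {(b, c): msg[b] * phi2[(b, c)] for b in (0, 1) for c in (0, 1)}

# Any marginal now follows locally, e.g. the prior marginal of C.
pC = {c: sum(joint_bc[(b, c)] for b in (0, 1)) for c in (0, 1)}
print(pC)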

This book describes probabilistic expert systems in a more rigorous
and focused way than existing literature, and
provides an annotated bibliography that includes pointers to
conferences and software. Also included are exercises that will help
the reader begin to explore the problem of generalizing from probability
to broader domains of recursive computation.


About the Author
Glenn Shafer is a Professor in the Department of Accounting and
Information Systems in the Faculty of Management at Rutgers
University. His contributions to the foundations of probabilistic and
causal reasoning include his work on Dempster-Shafer theory and more
recent work on causal conjecture.

1996 / viii + 80 pages / Softcover / ISBN 0-89871-373-0
List Price $24.50 / SIAM/CBMS Member Price $19.60 / Order Code CB67

For more information or ordering, contact:
SIAM
3600 University City Science Center, Philadelphia, PA 19104
215-382-9800; fax 215-386-7999; siam@siam.org; http://www.siam.org


>~~~Siftware:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Tue, 26 Nov 1996 11:14:51 -0500 (EST)
From: William Cohen (wcohen@research.att.com)
Subject: RIPPER learning system now available to researchers

I'm making the most recent version of my rule learning system RIPPER
available to researchers. The relevant URL is

http://www.research.att.com/~wcohen/ripperd.html

RIPPER is a fast, highly noise tolerant rule learner, originally aimed
at learning problems involving very large noisy datasets. It also
contains some extensions that make it convenient to use in learning to
classify text.
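
For readers new to rule learners, the Python fragment below
illustrates only the *form* of output such a system produces, an
ordered list of IF-THEN rules over set-valued (bag-of-words) features
with a default class. It is not RIPPER's induction algorithm, and the
rules and classes are invented.

rules = [
    ({"warehouse", "olap"},  "database"),    # IF both words occur THEN class
    ({"posterior", "prior"}, "statistics"),
]
DEFAULT = "other"

def classify(document, rules=rules):
    words = set(document.lower().split())    # set-valued feature: bag of words
    for condition, label in rules:
        if condition <= words:               # every word in the condition occurs
            return label
    return DEFAULT

print(classify("The OLAP warehouse loads nightly"))    # -> database
print(classify("A note on prior and posterior odds"))  # -> statistics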

For more details on the RIPPER algorithm you can look at

- 'Fast effective rule induction' from ML95
www.research.att.com/~wcohen/postscript/ml-95-ripper.ps

- 'Learning trees and rules with set-valued features', from AAAI-96
www.research.att.com/~wcohen/postscript/aaai-96.ps

For results on using RIPPER for text classification you can also look
at

- 'Context-sensitive learning methods for text categorization' (with
Yoram Singer), from SIGIR-96
www.research.att.com/~wcohen/postscript/sigir-96.ps

- 'Learning rules that classify e-mail', from AAAI Spring Sym 1996
www.research.att.com/~wcohen/postscript/aaai-ss-96.ps

- 'Learning to query the web' (with Yoram Singer), from a AAAI-96 workshop
www.research.att.com/~wcohen/postscript/aaai-ws-96.ps

- 'Learning to classify English text with ILP methods', from ILP-95,
www.research.att.com/~wcohen/postscript/ilp.ps


William Cohen

AT&T Labs-Research
600 Mountain Avenue
Room 2A-427
Murray Hill, NJ 07974

email: wcohen@research.att.com
WWW: http://www.research.att.com/~wcohen/

>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Wed, 20 Nov 1996 17:40:13 +0100
From: Mihael Ankerst (ankerst@informatik.uni-muenchen.de)
Subject: new software for your list

Software: VisDB


*URL:
http://www.dbs.informatik.uni-muenchen.de/dbs/projekt/visdb/visdb.html

*Description:
A Visual Data Mining and Database Exploration System
*Discovery tasks: Visualization,
Classification, Clustering, Dependency analysis
*Comments:
*Platform(s): HP-UX (LINUX version planned)

*Contact:
Daniel A. Keim
Ludwig-Maximilians-Universität München
Lehr- und Forschungseinheit für Datenbanksysteme
Oettingenstraße 67
D-80538 München
Germany

phone: +49-89-2178-2225

fax: +49-89-2178-2192

email: keim@informatik.uni-muenchen.de
*Status: Research Prototype
*Source of information: developer, publication


>~~~Positions:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Tue, 26 Nov 1996 17:12:02 -0500
From: eric@ahcsun1.heuristics.com (Eric King)
Subject: Job Posting for VP, Marketing Segmentation Manager

VP, MARKETING SEGMENTATION MANAGER

CoreStates Bank of Delaware, the credit card subsidiary of CoreStates
Financial Corp, has an opening for a VP, Marketing Segmentation Manager.

Reporting to the Director of Consumer Card Marketing, this individual will
be responsible for identifying and evaluating high-profit prospect and
account holder segments to enhance acquisition, activation and retention
methodologies. The role involves extensive analysis and the use of a wide
variety of predictive modeling and data mining tactics to maximize program
effectiveness and profitability. You will also supervise our Database
Manager, who is responsible for prospect database development and
maintenance of the customer database. To qualify, you must have 5-7 years
of experience in analytical and predictive modeling techniques, preferably
in a credit card direct response or consumer database capacity.
Multidimensional project and supervisory experience, as well as the ability
to work effectively across business lines and levels, is also required. A
bachelor's degree in a quantitative concentration or marketing is required.
Applicants must be proficient in SAS, spreadsheet and predictive modeling
packages.

CoreStates Bank of Delaware offers competitive salaries and excellent
benefits. We support the Delaware Clean Air Act. To be considered, please
forward your resume by US Mail only, including salary requirements to Human
Resources, CoreStates Bank of Delaware NA, 3 Beaver Valley Road, Wilmington,
DE 19803. An equal opportunity employer.


>~~~Meetings:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Mon, 2 Dec 1996 10:37:45 GMT
From: charles@amsta.leeds.ac.uk (Charles Taylor)
**************************************************************************
CALL FOR PAPERS

MLNET FAMILIARIZATION WORKSHOP

26th April 1997

LEARNING IN DYNAMICALLY CHANGING DOMAINS:
THEORY REVISION AND CONTEXT DEPENDENCE ISSUES.
**************************************************************************

Up-to-date information will be kept at:
http://www.amsta.leeds.ac.uk/statistics/ecml97/dyn.htm


**********
BACKGROUND
**********

In traditional Machine Learning, the available examples (the training data)
are usually used to learn a concept. In many practical
situations in which the environment changes, this procedure ceases to work.
Generally speaking, in supervised learning the
application of a discrimination algorithm to classify new, unseen examples
will be problematic if one of the following events occurs after the
``learning phase'':
* The number of attributes changes
* The number of attributes remains the same but the interpretation of
the records of the datasets changes over time
* A description of a concept (class) exists but there are additional databases
relating to the given concept (class) that may modify (refine) the existing
knowledge base


In the case of supervised learning (classification or prediction),
the extracted rules or dependencies can become invalid over
time due to the different changes that can occur. This statement is
probably valid for unsupervised and reinforcement learning as well.
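
One classical statistical response to such change, and one of the
topics listed below, is a CUSUM test. Here is a minimal Python sketch
(the reference error rate, slack and threshold are invented) that
flags an upward shift in a classifier's stream of 0/1 errors after the
learning phase:

def cusum_alarm(stream, mean0=0.10, slack=0.05, threshold=2.0):
    # One-sided CUSUM: accumulate error in excess of the reference
    # rate plus slack; alarm when the cumulative excess is too large.
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + (x - mean0 - slack))
        if s > threshold:
            return t   # first index at which drift is signalled
    return None

# 40 in-distribution errors (about 10%), then the concept changes (about 50%).
errors = ([0] * 9 + [1]) * 4 + [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1]
print(cusum_alarm(errors))  # alarms shortly after the change at index 40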

**************
THE CHALLENGE:
**************

There have been some attempts in the literature to address the problem of
structural change in concepts and the dynamic aspects of data in Machine
Learning, but most ML algorithms still cannot deal with this problem.
Research in this direction has very important practical implications, because
structural change in concepts occurs often in real-world domains.
By addressing this issue, Machine Learning has a better chance of acceptance
in industry and commerce.


There are many contributions that Statisticians have (already) made to this
field, but communication is hampered by differing terminology, for
example:
ML                                      Statistics
--                                      ----------
Context Learning                        Parameter estimation in multivariate regression
Dynamic Learning                        Structure change (econometrics)
Theory Revision, Knowledge Integration  Hypothesis testing

So, the question is:

Have the Statisticians done everything? What is the challenge for ML?

***************
RELEVANT TOPICS:
***************

The ideal papers would be those which discuss learning aspects (preferably
two or more, from Machine Learning, Statistics and Neural Nets) in the
following areas:
* Structural change
* CUSUM tests/ Quality control
* Dynamic learning, dynamic models
* Incremental learning, Sequential learning
* Context learning
* Theory revision
* Knowledge Integration

****************
AIMS AND PROGRAM
****************

It is the aim of this workshop to:

* Bring ML and statistics researchers together
* Discuss the state of the art of structural change in concepts. This
discussion should involve not only symbolic ML but especially statistical
ML as well. Relevant contributions using Neural Networks are also welcome.
* Discuss the direction for further research in the structural change in
concepts bearing in mind that the main goal is solving real-world problems.

The program (April 26) will include invited talks and presentations of
accepted papers (both oral and poster presentations). All of the
contributions will be summarized by a member of the organizing committee in
a talk. The contributions (including the invited talks) will be distributed
as workshop notes.

***********
ORGANIZERS
***********
Gholamreza Nakhaeizadeh (Daimler-Benz, Germany)
Charles Taylor (University of Leeds, UK)
Ivan Bruha (McMaster University, Canada)


*********************
SUBMISSION OF PAPERS:
*********************

Two kinds of submissions are solicited: full papers describing substantial
completed research or applications, and poster papers reporting on work in
progress. Submissions must be clearly marked as one of these two kinds. The
program committee may decide to move accepted contributions from the full
paper to the poster category.

The size limit for submissions is 12 pages for full papers, 5 pages for poster
papers (excluding title page and bibliography, but including all tables and
figures).

It is hoped that selected contributions will be subsequently published in an
integrated volume. Submitted papers should preferably be formatted according
to the LNAI guidelines (LaTeX style files are available at
http://is.vse.cz/ecml97/styles.htm). Authors are encouraged to make their
papers available in advance (by anonymous ftp or at a URL) so that wider
discussion is possible. In the future, the above page will provide links to
such papers.

A separate title page must contain the title of the paper, the names and
addresses of all authors, up to three keywords, and an abstract of max. 200
words. The full address, including phone, fax, and e-mail, must be given for
the first author (or the contact person).

The following items must be submitted by February 15, 1997:
either a camera-ready copy of the paper or a PostScript (uuencoded) file,
together with an electronic version of the title page only (plain ASCII).
Send submissions, enquiries, etc. to:
Gholamreza Nakhaeizadeh (ECML-97)
Daimler Benz AG
Research and Technology
Postfach 2360
D-89013 Ulm
Germany
e-mail: nakhaeizadeh@dbag.ulm.DaimlerBenz.COM


Papers will be evaluated with respect to relevance, technical soundness,
significance, originality, and clarity. Papers reporting on real-world
applications will be evaluated according to special criteria.

*************************************
REGISTRATION AND FURTHER INFORMATION:
*************************************
The workshops will be open to anyone. Participants who are not members
of MLnet pay a fee to cover the marginal costs of the workshop. The fee
is yet to be determined. MLnet will pay the organisational costs for
its members.

MLnet will arrange travel bursaries for its members to take part in the
workshops.

For information about paper submission and the program, contact the program
chair. For information about local arrangements, registration forms, etc.,
contact the local organizers at actionm@cuni.cz


****************
IMPORTANT DATES:
****************
Submission deadline: 15 February 1997
Notification of acceptance: 8 March 1997
Camera ready copy: 1 April 1997
Workshop: 26 April 1997


>~~~Meetings:~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Tue, 26 Nov 1996 11:54:34 -0800 (PST)
From: Computational Finance (compfin@cse.ogi.edu)
Subject: Computational Finance at the Oregon Graduate Institute

COMPUTATIONAL FINANCE at the Oregon Graduate Institute of
Science & Technology (OGI)

Masters of Science Concentrations in
Computer Science & Engineering (CSE)
Electrical Engineering (EE)

Now Reviewing MS Applications for Fall 1997!
Early Decision Deadline: January 15 (Decisions by February 15)
Final Deadline: March 15 (Decisions by April 15)

New! Certificate Program Designed for Part-Time Students.

For more information,
call the OGI Office of Admissions (503)690-1027, or visit
http://www.cse.ogi.edu/CompFin/
=======================================================================

Computational Finance Overview:

Advances in computing technology now enable the widespread use of
sophisticated, computationally-intensive analysis techniques applied
to finance and financial markets. The real-time analysis of
tick-by-tick financial market data and the real-time management
of portfolios of thousands of securities are now sweeping the
financial industry. This has opened up new job opportunities for
scientists, engineers, and computer science professionals in the
field of Computational Finance.

The strong demand within the financial industry for technically-
sophisticated graduates is addressed at OGI by the Masters of
Science and Certificate Programs in Computational Finance. Unlike
a standard two year MBA, the programs are directed at training
scientists, engineers, and technically-oriented financial professionals
in the area of quantitative finance.

The Masters programs lead to a Master of Science in Computer Science
and Engineering (CSE track) or in Electrical Engineering (EE track).
The MS programs can be completed within 12 months on a full time
basis. In addition, OGI has introduced a Certificate program
designed to allow professionals in engineering and finance a way
of acquiring skills or upgrading their skills in quantitative finance
on a part-time basis.

The Computational Finance MS concentrations feature a unique
combination of courses that provide a solid foundation in finance
at a non-trivial, quantitative level, plus training in the essential
core knowledge and skill sets of computer science or the information
technology areas of electrical engineering. These skills are
important for advanced analysis of markets and for the development
of state-of-the-art investment analysis, portfolio management,
trading, derivatives pricing, and risk management systems.

The MS in CSE is ideal preparation for students interested in
securing positions in information systems in the financial industry,
while the MS in EE provides rigorous training for students interested
in pursuing careers as quantitative analysts at leading-edge
financial firms.

The curriculum is strongly project-oriented, using state-of-the-art
computing facilities and live/historical data from the world's
major financial markets provided by Dow Jones Telerate. Students
are trained in using high level numerical and analytical packages
for analyzing financial data.

OGI has established itself as a leading institution in research
and education in Computational Finance. Moreover, OGI has very
strong research programs in a number of areas that are highly
relevant for work in quantitative analysis and information systems
in the financial industry.

-----------------------------------------------------------------------
Admissions
-----------------------------------------------------------------------

Applications for entrance into the Computational Finance MS programs
for Fall Quarter 1997 are currently being considered. The deadlines
for receipt of applications are:
January 15 (Early Decision Deadline, decisions by February 15)
March 15 (Final Deadline, decisions by April 15)

A candidate must hold a bachelor's degree in computer science,
engineering, mathematics, statistics, one of the biological or
physical sciences, finance, econometrics, or one of the quantitative
social sciences. Candidates who hold advanced degrees in these
fields or who have experience in the financial industry are also
encouraged to apply.

Applications for the Certificate Program are considered on an
ongoing basis for entrance in any quarter.

----------------------------------------------------------------------
Contact Information
----------------------------------------------------------------------

For general information and admissions materials:

Office of Admissions
Oregon Graduate Institute
P.O.Box 91000
Portland, OR 97291-1000

E-mail: admissions@admin.ogi.edu
Phone: (503)690-1027
WWW: http://www.cse.ogi.edu/CompFin/

For special inquiries:

E-mail: compfin@cse.ogi.edu

>~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~