
The Magazine for the Rocky Mountain Oracle Users Group • Vol 67 • Summer 2012

Stewart Bryson
Agile Data Warehousing

Peter Koletzke & Duncan Mills
Oracle WLS Application Security

Rocky Mountain Oracle Users Group
PO Box 621942
Littleton, CO 80162

Change Service Requested

Non-Profit Organization
U.S. Postage Paid
San Dimas, CA
Permit No. 410

Member Focus - David Peake
Board Focus - John Jeunnette

Mike Garcia
Leverage Oracle AQ

Member Photos
Welcome to Colorful Colorado

2 SQL>UPDATE • Summer 2012

Learn more about our students, graduates and faculty.

CPS.Regis.edu/beinfluential

EXPLORE IN-DEMAND DEGREES | REQUEST ADDITIONAL INFORMATION | APPLY NOW

CPS.Regis.edu/beinfluential | 1.800.659.6810 |

> CLASSES START SOON

Define your

INFLUENCE

BE INFLUENTIAL.

ON YOUR SCHEDULE

As an accredited university ranked by U.S.News & World Report, busy adults choose Regis University for the academic rigor, and our College for Professional Studies for the flexibility.

> Accelerated 5- and 8-week classes
> Online and campus-based learning
> Multiple start dates throughout the year

CHANGE THE WORLD

Todd uses his degree to lead by example. He shows young kids how to build influence, even with LEGO® robotics. How will you be influential?

A SCHOOL OF INFLUENCE

The School of Computer & Information Sciences in Regis University College for Professional Studies uses state-of-the-art learning environments, experienced faculty and strong industry relationships to teach students the latest technologies and software.

Please Welcome Our New Board Members

David Fitzgerald

It has been my goal as a DBA to contribute to the community as a whole and help dispel rumor and speculation by providing facts backed up by examples, thus improving the quality of the database-related information on the Internet. I have taught classes at client sites to educate those who would be using the database on a daily or weekly basis.

My involvement in our industry spans twenty-two years and includes a diverse and extensive background in database administration. I have provided services to several major clients including American Airlines, SiriusXM Radio and the U.S. Postal Service, among others. I have also contributed to the knowledge-base of the Oracle community through participation in the comp.databases.server hierarchy of newsgroups, the Club Oracle forums, Internet Relay Chat (IRC) channels (as an operator in the #oracle and #sql channels on DalNet and Undernet), articles on databasejournal.com and my own blog (http://oratips-ddf.blogspot.com).

Art Marshall

Having been a volunteer at RMOUG Training Days and an RMOUG website volunteer technician, I am poised to take on additional responsibilities to assist with the planning and the direction of the group for the coming year. Being involved with RMOUG during the past several years has been a great knowledge-gaining, rewarding experience for me, and I wish to contribute my efforts to the good and growth of the organization in return.

I have worked in the IT industry since 1980, having used various database systems from then on, including my first exposure to Oracle in 1991. My Oracle responsibilities started with platform migration, then query and Forms development, and then those of a database administrator since 1997, in both production and development environments, including non-clustered, Data Guard, and RAC configurations. The versions I have been involved with include 6 through 11. With this long-term involvement in IT, I have a strong commitment to continually expand my knowledge of important Oracle database technologies, and to facilitate the usage and sharing of that knowledge with others.


On the Cover: Bryan Wells captured this beautiful shot at the Idaho Springs Reservoir the later part of last summer, once the snow pack finally receded. The peak directly in the center is Mt. Spalding, with Mt. Bierstadt to the left. Bryan likes to hike and fish the high country because of the views, and the tranquility. It beats the buzz of the laptop and the flicker of the screen. On this day he was using black ants and caddis with his fly rod to pick up a lot of cruisers most of the late morning and into the afternoon. But, as luck would have it, the beer was calling.

Bryan is currently an Application DBA for Charles Schwab, responsible for making sure the development environments are functioning properly with the current software release. This includes the Oracle 10g/11g back end databases. He has been an RMOUG member on and off since 2001, when he took his career into the Oracle realm as a PL/SQL developer. He is a proud graduate of Regis University.

Contents

Editor & Director
Pat Van Buskirk
[email protected]

Subscriptions & Technical Advisor
Heidi Kuhn
[email protected]

Contributing Writers
Stewart Bryson
Mike Garcia
Peter Koletzke
John Jeunnette
Duncan Mills
David Peake

SQL>Update is published quarterly by

Rocky Mountain Oracle Users Group
5409 S. Union Court
Littleton, CO 80127

303-948-1786 Email: [email protected]

www.rmoug.org

Please submit advertising and editorial material to [email protected]. RMOUG reserves the right to edit, revise or reject all material submitted for publication. Liability for errors in ads will not be greater than the cost of the advertising space. ©2011 Rocky Mountain Oracle Users Group

features

departments

2 Board of Directors Welcome to Our New Board Members

16 Photos From Our Members Wonderful Reader Submissions

24 RMOUG Member Focus David Peake

26 RMOUG Board Focus John Jeunnette

monthly features

4 Letter From Our President
5 Stan Yellott Scholarship Fund
9 RMOUG DBLabs
28 Advertising Rates
29 RMOUG Board of Directors
30 RMOUG Calendar of Events
31 August 2012 QEW
31 Index To Advertisers

A special Thank You to Heidi Kuhn, Tim Gorman and Kellyn Pot’Vin, without whose continuing help, this publication would not be possible.

18 Agile Data Warehousing With Exadata and OBIEE
The Case for Extreme BI
by Stewart Bryson

6 Leverage Oracle AQ
To Design Event Driven Applications
by Mike Garcia

12 Oracle WLS Application Security
Implementing the Superstition in JDeveloper
by Peter Koletzke & Duncan Mills


RMOUG’s mission has always been educational, as in continuing education for IT professionals in Colorado. This mission is embedded in everything we do, as our annual conference moves past 20 years and our quarterly meetings provide great educational value and networking every few months. To my knowledge, RMOUG is the only Oracle users group in the world with the status of a charitable non-profit corporation. That status was difficult to obtain and took many years: it was first applied for almost 10 years ago, and only achieved a few years ago.

So, because of the annual “Training Days” conference in February, the quarterly educational workshops (QEWs) in May, August, and November, and the DBLab Meetups with Regis University, continuing education for those already in the IT industry is the heart of our mission.

But what about students trying to enter the IT industry?

Starting more than 10 years ago, Stan Yellott, a long-time RMOUG leader, began paying the way for high-school students at Pine Creek High School in Colorado Springs, CO to come to the “Training Days” conferences out of his own pocket. After Stan’s passing in late 2006, RMOUG established the Stan Yellott Scholarship Fund (SYSF) to help fund the education of high school, undergraduate, and graduate students from Colorado in the IT industry. This past year alone, the Stan Yellott Scholarship Fund has provided four $1,000.00 scholarships to area students and enabled twelve Pine Creek High School students and their teacher to attend the “Training Days 2012” conference.

So, in addition to providing continuing education for Colorado IT professionals, another part of RMOUG’s mission is to provide scholarships to Colorado students seeking to enter the IT industry. Two very similar, but different, missions.

SYSF is not yet a big scholarship fund, and RMOUG is working toward increasing the volume of funds backing the scholarships so that we can disburse more and larger scholarships. Most years, if the “Training Days” conference earns money, then that money is steered into SYSF, as is the case this past year when outgoing Training Days 2012 director John Jeunnette led RMOUG to a $16,000 profit.

Still, the Training Days conference does not always turn a profit, and there have been plenty of years where it has lost a significant amount of money; but that is the nature of such a large event. So, we cannot count on profits from the Training Days conferences to fund SYSF every year, and that means we need to identify another source of funding.

And perhaps that is where the two very similar, but different, educational missions of RMOUG intersect...

From The President
Tim Gorman

• On the one hand, we have a community of IT professionals seeking focused, advanced continuing education and, as professionals, able to pay.
• On the other hand, we have a community of nascent IT professionals who are still in school, needing exposure to industry events and scholarships for academic training in the basics of computing.

By partnering with IT training vendors to provide focused and advanced IT training seminars in Colorado, we can accomplish both of those missions.

In June, RMOUG is working with Speak-Tech, a Minnesota-based training organization, to bring seminars by Oracle PL/SQL expert Steven Feuerstein and Oracle internals expert Jonathan Lewis to the Front Range. These seminars are not free, as both Mr Feuerstein and Mr Lewis develop their curriculum themselves and expend the time and effort to travel to Colorado. But the arrangement with Speak-Tech provides funding for the Stan Yellott Scholarship Fund in several ways...

1. RMOUG members receive a substantial discount to these seminars over non-members
• For the 2-day Feuerstein seminar, RMOUG members pay $800, non-members pay $900 - a discount of $100
• For the 1-day Lewis seminar, RMOUG members pay $300, non-members pay $350 - a discount of $50

2. Mr Feuerstein and Mr Lewis are offering attendees a free copy of their most recent books
• Attendees have the option to donate the cost of the book to SYSF instead
• Speak-Tech will match that donation for each attendee choosing to donate

3. In addition to these donations, for every attendee beyond a count of thirty (30), Speak-Tech will donate another $100 to SYSF

So, in the event that a seminar attracts 40 attendees, 20 of whom opt for the SYSF donation instead of a book, SYSF could stand to earn almost $2,200.00 in donations from each of these Speak-Tech seminars!

If other training vendors wish to work with RMOUG, and the more the merrier, then RMOUG will be glad to offer similar arrangements, helping our dual educational missions intersect.

In time, these seminars will hopefully increase the availability of focused and advanced continuing IT education for RMOUG members, beyond the RMOUG newsletter “SQL>UPDATE”, beyond the Training Days conferences, beyond the QEWs, and beyond the Regis/DBLab meetups. And these seminars can augment the Training Days conference as a source of funding for the next generation of Colorado IT professionals.

Please let us know what you think about our educational mission, and how we can better enhance the quality and quantity of educational options.

Have a wonderful summer!


Stan Yellott Scholarship Fund

RMOUG Scholarship Mission

To provide educational opportunities to members of the organization about the information technology industry in general, and in particular the technology of Oracle Corporation to include databases, storage, networking and application development, specifically the products and services of the Oracle Corporation.

To collect, post and distribute information about Oracle technologies and other related technologies to members.

To provide members with the ability to help their peers maximize their knowledge and skills working with products in information technology and in particular Oracle products.

To provide a consolidated channel of communication, conveying needs, concerns, and suggestions, for members of the organization, to Oracle Corporation and other vendor corporations involved with Oracle related technology.

To encourage members to present their information technology experiences using Oracle and other products and services.

To provide a consolidated channel of communication between members of the RMOUG and other communities in related information technology industries.

To promote educational opportunities for students of information technology through directed funding and services for educational purposes.

RMOUG is committed to supporting others in the pursuit of technical knowledge.

The Scholarship Fund started in 2001 to encourage future IT professionals in their efforts to broaden their knowledge. In 2007, RMOUG voted to rename the scholarship fund to honor the memory of Stan Yellott. Stan was a long-time member of RMOUG who supported the user community by serving on the RMOUG board. Stan focused on expanding Oracle educational opportunities. Stan’s vision was to include high school and college students as the next generation of IT professionals.

Stan Yellott Scholarship Fund

Eligibility Requirements

• Registered, or the intent to register, at an accredited post-secondary educational institution in the United States
• Minimum GPA 2.5 from a current official transcript
• Currently enrolled in a computer science/information technology class and/or intends to enroll in a computer science class the following term

Award notification will be given within 45 days following application deadlines. Upon acceptance of the scholarship, additional information may be required.

For Details, Visit the RMOUG Website

SYSF Recipient
Mike Herder

As a previous winner of the Stan Yellott Scholarship, I can’t begin to express how much RMOUG has helped me meet my end goal. I am a first generation student, which means I am the first out of my family tree to go to college. Majoring in three engineering related areas, computer science/engineering, mechanical engineering, and applied mathematics, wasn’t an easy task. Next summer should culminate my academic career.... As the winner of the summer/fall scholarship I want to make sure that RMOUG knows how grateful I am for their assistance/support because the scholarship I received helped me cover my tuition for this semester and previous semesters... I hope one day that I will find myself in a position where I can help out someone as much as RMOUG has helped out me. Again, thank you for your support over the years, I am truly grateful.

Sincerely, Mike Herder


Leverage Oracle AQ To Design Event-Driven Applications

by Mike Garcia

Ever find yourself in a situation where you’ve got a requirement to integrate application components using an event-driven architecture across a network? Assume one application has two components running on separate machines and each component needs to trigger events in the other and vice-versa. If the Oracle database is in the technology mix, then Oracle Streams AQ can be used to facilitate communications between these remotely located application components. This can be achieved by sending messages between the application components that use both message publishers and subscribers.

Let’s first define what an event-driven architecture is. It is a software architectural pattern applied in the design of applications that transmit events among loosely coupled software components. Those events can be binary, XML, JSON or any other data type required to allow components to function coherently. An event-driven system consists of event producers and consumers. Consumers are responsible for reacting to an event when it is presented. Producers create events based on defined rules, logic or a schedule. Loosely coupled software components are ones that have little or no knowledge of the definitions of other, separate components. This is generally a desirable design because the less a component has to know about some other component, the less complexity there is and the easier the components are to test. A well-known example of an event-driven architecture is the Java Swing API.

The Oracle database has included messaging technology for some time. Known as Oracle Streams AQ, it supports modern enterprise design patterns including point-to-point and publish-subscribe architectures. Point-to-point is where one client sends a message to a queue and another client receives messages directly from that same queue. The typical example is a telephone call where two telephones are connected. There are two endpoints at play. The publish-subscribe pattern is when a client sends a message to a queue and that queue has one or more subscribers waiting to receive messages. The client generally has no knowledge of who the subscribers are. In other words, many endpoints receive messages transmitted by one node, like a television broadcast. Oracle Streams AQ offers both styles of messaging.
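The distinction between the two styles can be sketched in plain Java. This is an illustrative stand-in only: `Topic` and `PointToPointQueue` are hypothetical names for the two delivery patterns, not Oracle AQ's API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

// Publish-subscribe: every registered subscriber receives each message.
class Topic {
    private final List<Consumer<String>> subscribers = new ArrayList<>();
    void subscribe(Consumer<String> s) { subscribers.add(s); }
    void publish(String msg) { subscribers.forEach(s -> s.accept(msg)); }
}

// Point-to-point: each message is consumed by exactly one receiver.
class PointToPointQueue {
    private final Queue<String> q = new ArrayDeque<>();
    void enqueue(String msg) { q.add(msg); }
    String dequeue() { return q.poll(); } // null when the queue is empty
}
```

With AQ, the same split is made when the queue table is created: a single-consumer queue gives point-to-point behavior, while a multi-consumer queue fans each message out to every subscriber.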

Recently I worked on an application (see Figure 1) where the requirement called for having two primary components (a Swing-based user interface and an engine) work in concert with each other. This was a crucial design objective. Having multiple UIs connected to the engine, only one in write-mode, needed to be supported so that events in the engine could trigger updates to all connected UIs. The role of the engine was primarily to download financial data files and do transactional loads to a local database, along with auditing and other housekeeping tasks. The application components had to operate harmoniously without using any additional TCP ports other than the JDBC database connection port, due to customer security requirements impeding our ability to send events over TCP. By harmonious I mean that when an event in either the UI or engine took place, it needed to be handled by the other component with as little overhead as possible and minimal code; preferably by an API that had potential for re-use.

One early idea was to poll an “event” table periodically for changed data and update the UI from query results. The code to support this, though, was deemed to be overly complex, and there was skepticism about the performance characteristics of select-polling due to the potential for hundreds of events arriving in short time intervals. So we chose to go with the “push” capabilities afforded by Oracle Streams AQ. It uses asynchronous messaging where message sends (enqueue) and receives (dequeue) are non-blocking. The receive calls wait for a message on the queue, but the call itself does not block per se when messages are available. New queue or event subscribers, such as a new UI connecting to the engine, can be easily added to the list of subscribers. More importantly, no poll schedule was needed that otherwise would have wasted resources and added more code bloat.

Figure 1 – The UI interacting with back-end engine (Engine is a daemon process or service)

For our specific requirements we used Java technology and were able to rely on having an Oracle database in install environments. In order to avoid having to write hard-coded SQL or create a custom data access layer, we opted to use Hibernate ORM (Object-Relational Mapping) technology. This ended up fitting nicely with the event-driven architecture we were aiming for. Hibernate has some really useful event listeners, such as PostInsertEventListener and PostUpdateEventListener, that we use to trigger events. Check out the Hibernate documentation online for a complete listing of supported event listeners.

The design to create a bi-directional messaging framework was premised on one key component, the SharedTableAdvisor or STA for short. The purpose of the STA was to create a background thread that would subscribe to a queue and await messages. It needed to be a background thread because when no messages were being sent to the queue, the receive call would simply wait for messages. If the receive call were placed in-line with the application logic, the net effect would be to block application code. Not ideal when lots of activity is taking place. In order for the STA to communicate activity to connected components, TableChangeListener objects are registered with the STA for receiving callbacks when DML changes to tables occur. The UI, for example, registers its interest in knowing when different events such as downloading a file, loading data, calculating bytes processed and success/failures are triggered in the engine. The STA, upon receiving a message from the associated queue, calls back to all the interested TableChangeListener objects that perform their own logic – usually to update a UI widget. Both the UI and engine contain STA threads since activity flows both ways; however, they use different queues since it would be detrimental to have an STA dequeue a message it just enqueued. This would create a race condition and break functionality by not allowing events to arrive at the proper destination.
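That registration-and-callback flow can be sketched as follows. The interface and method names here are assumptions based on the description above, not the project's actual code.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of the listener contract described in the article.
interface TableChangeListener {
    void onTableChange(String tableName, String rowData);
}

// Minimal stand-in for the SharedTableAdvisor's listener management:
// components register interest, and each dequeued message fans out to
// every registered listener (each typically updating its own UI widget).
class SharedTableAdvisor {
    private final List<TableChangeListener> listeners = new CopyOnWriteArrayList<>();

    void addTableChangeListener(TableChangeListener l) { listeners.add(l); }

    // Invoked when a message arrives from the associated queue.
    void fireTableChanged(String tableName, String rowData) {
        for (TableChangeListener l : listeners) {
            l.onTableChange(tableName, rowData);
        }
    }
}
```

Using a `CopyOnWriteArrayList` means new subscribers (such as an additional UI connecting to the engine) can register while callbacks are in flight, which matches the "easily added" behavior the design calls for.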

Once the STA receives a message from an Oracle queue, that message is asynchronously dispatched to a MessageHandler object whose job is to do the actual callbacks on all the registered TableChangeListeners. This frees up the STA to immediately go back to the wait-state for new messages. The MessageHandler is implemented as a resizable thread-pool so that if the message flow rate increases substantially, the handlers can use more threads to execute the callbacks. Figure 2 shows the primary STA run method’s code. The STA primary thread works in concert with a MessageReceiver thread. Having a separate thread running the MessageReceiver allows for plugging in other implementations if required. It also allows for concise error handling and re-establishment of the MessageReceiver to wait for messages. In our case we have two MessageReceiver implementations, Oracle and SQL Server. The SQL Server version is built on top of SQL Server Service Broker. Figure 3 shows the secondary, MessageReceiver thread’s run implementation.

Both the primary and secondary STA threads communicate by sharing a private lock Object. They use condition variables keepRunning and waitCondition that are AtomicBooleans. AtomicBoolean allows for lock-free, thread-safe variable reads and writes, which are required in this type of concurrency. On startup of the STA, the MessageReceiver thread immediately acquires the lock and passively waits for messages, as shown in Figure 3. The receive() method in our Oracle implementation calls the recv_msg procedure shown in Figure 7. Once the MessageReceiver pops a message off the queue, the lock (L1 in Figure 4) transitions back to the STA primary thread, which asynchronously invokes the handleMessages() method in Figure 2. The lock L1 then gets re-acquired by the MessageReceiver. By cycling the lock in this manner, the STA is able to pop (or dequeue) messages off the Oracle AQ as rapidly as possible without interference from client code in the TableChangeListener callbacks.
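The essence of this receive-and-hand-off cycle can be approximated in plain Java. In the sketch below a `BlockingQueue` stands in for the Oracle AQ queue and a cached thread pool plays the resizable MessageHandler pool; the class and method names are illustrative, not the project's actual code.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the STA receive loop: block waiting for a message, then
// dispatch it asynchronously so the loop returns to waiting immediately.
class ReceiverLoop implements Runnable {
    private final BlockingQueue<String> queue;   // stand-in for the AQ queue
    private final ExecutorService handlers = Executors.newCachedThreadPool();
    final AtomicBoolean keepRunning = new AtomicBoolean(true);

    ReceiverLoop(BlockingQueue<String> queue) { this.queue = queue; }

    @Override
    public void run() {
        while (keepRunning.get()) {
            try {
                // Blocking receive with a timeout so a terminate request is noticed.
                String msg = queue.poll(100, TimeUnit.MILLISECONDS);
                if (msg != null) {
                    // Hand off to the pool; the loop goes straight back to waiting.
                    handlers.submit(() -> handle(msg));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        handlers.shutdown();
    }

    // In the real design this is where the TableChangeListener callbacks run.
    void handle(String msg) { }
}
```

Because the callbacks execute on pool threads, slow client code in a listener cannot stall the dequeue loop, which is the same property the lock cycling above provides.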

A terminate command can be sent to the STA, which will also terminate the associated MessageReceiver thread. Note the generic msgRcvr.receive() call: because MessageReceiver is an interface, different implementations can be used to receive messages, for example a TCP receiver. The receive method in Figure 5 shows the code to invoke the PL/SQL procedure that receives messages. One thing to note is that any number of STA threads can be run in an application. They can be bound to the same or different queues depending on the requirement. For pure component integration, though, the restriction is that the two directions must be bound to distinct queues to avoid race conditions. Apart from that, there is no reason why multiple STAs cannot be bound to the same queue.

Figure 2 – Primary STA thread run implementation

Figure 3 – Secondary STA thread run implementation

Figure 4 – Lock transition between STA, MessageReceiver, and MessageHandler

The event “triggering” mentioned earlier for sending messages on table changes is implemented using Hibernate’s event listeners. Whenever an insert, update or delete takes place on a table (mapped to Hibernate POJOs) designated as “shared-state,” Hibernate events fire and messages are sent as part of the event firing. The message consists of the row information in XML form. Column filtering could have been implemented; however, we did not identify a need, so we send the entire row. The message sends are only for tables that are designated “shared-state” tables. If another table is required to share state between application components, it can easily be added to a properties file. The application will pick this up and send event messages from DML activity on the new table.

“Shared-state” tables stand in contrast to the application components residing in the same memory space, where the sharing would have been via shared, stateful objects. Figure 6 is all the code required to call the send-message stored procedure. In Figure 7, the send procedure shows the relevant code to enqueue (put) a message to an Oracle AQ queue. Since our messages are XML, we use the SYS.XMLType in Oracle to process them. This has several advantages, mainly that there are lots of tools to work with XML in PL/SQL and Java. For larger messages the data could also be compressed; however, we chose not to, due to the small size of our messages.
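To make the row-in-XML payload concrete, here is one way such a message might be serialized. The element names and escaping are my own illustration; the project's actual message schema is not shown in the article.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative serializer: turn one changed row into a small XML message.
class RowMessage {
    static String toXml(String table, Map<String, String> columns) {
        StringBuilder sb = new StringBuilder();
        sb.append("<row table=\"").append(escape(table)).append("\">");
        for (Map.Entry<String, String> e : columns.entrySet()) {
            sb.append("<col name=\"").append(escape(e.getKey())).append("\">")
              .append(escape(e.getValue())).append("</col>");
        }
        return sb.append("</row>").toString();
    }

    // Escape the characters that are unsafe in XML text and attribute values.
    private static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }
}
```

A payload like this maps directly onto SYS.XMLType on the PL/SQL side, and the receiving TableChangeListener can parse out the columns it cares about.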

In order to send and receive messages we chose the PL/SQL interface to Oracle AQ. Below is the package along with the send/receive procedures we use. The unlisted body procedures in the package are ones we use to handle administrative operations on the queues and subscribers.

In order to show the STA being used in a simple application, consider Figure 8. The Swing app incorporates the STA and pushes data to another application when the update button is clicked. The top-left UI and right UI apps are distinct Java processes. Upon editing and selecting the update button in the top-left UI, the bottom-right UI gets updated after the data is sent across Oracle AQ. Note that the window frame shows the unique subscriber ID. The right UI can also effect a change in the left UI. The application registers a TableChangeListener whose callback method updates the UI with the data sent across the message queue. Note that these processes need not be on the same machine. As long as the applications can connect to an Oracle database, they can interact in this manner.

Figure 5 – Receive method invoking PL/SQL stored procedure

Figure 6 – Send method invoking send PL/SQL procedure

Figure 7 – PL/SQL package and send/receive procedures

Figure 8 – Simple application using the STA


Because our project was a Java application from the start, the Oracle JMS API was considered instead of the PL/SQL interface, but due to the lack of familiarity with the API and support uncertainty, we stuck with PL/SQL. Also, having the procedural interface via JDBC avoided the build-compile-run overhead associated with having everything written in Java. Changes can easily be made to the PL/SQL package without impacting compiled code, which is a nice advantage of using PL/SQL.

One difficulty that I recall was using the SQLXML type in Java. We had to find the correct Oracle implementation jars for SQLXML and the Oracle XML parser. Not a major hurdle, but an annoyance nonetheless, since we already had been using another XML parser.

Other options were considered; however, due to the port restriction we were very limited. The implementation we have in place now allows for processing hundreds of events a second. In retrospect, had we gone with another approach, it probably would have been more complex, harder to support and more difficult to maintain.

A key thing to take away from this article is that using Java is not required. Any language environment that supports threading, whether it is C++, Perl, Python or C#, can leverage Oracle Streams AQ in order to propagate events and realize an event-driven architecture.

It should be clear that the code to support an event-driven architecture over Oracle AQ is reasonable. The most complex part of supporting it is the balancing act that has to be done with the threads to ensure smooth operation. Once that part is complete though, the integration possibilities of this solution are open to the imagination.

Mike Garcia has around 15 years of development experience and has worked as a software developer and consultant with various customers in the US and Europe on large J2SE/JEE projects. He is interested in several areas of technology includ-ing functional programming, business rule systems, and rich internet applications along with finding solutions to complex IT problems. He now works with McGraw-Hill Financial creating high performance applications and data warehouse solutions. He recently competed in his first Spartan race and looks forward to doing it again - “You’ll know at the finish line!”

Present Our DBLabs

Back in December of 2009, RMOUG and Regis University teamed up to start offering hands-on lab sessions on various topics. These events are open to the public (you do not have to be an RMOUG member), and they are a great way to learn new skills, get some hands-on experience in a lab environment, and network with other users and members! The meetings happen in one of the computer labs on the Regis campus in Denver, so each person gets their own workstation where they can do the labs. They are typically held on a weeknight evening, with food and drinks provided, so you can just show up, have some dinner, learn some new skills and meet new people!

Past topics have included OEM, Oracle on Linux, RMAN, Oracle database security, and Oracle automatic SQL tuning features. And we’ve had some well-known RMOUG “masters” as presenters, including Darl Kuhn and Tim Gorman. Future topics being planned include MySQL, OEM11g and more. And we are always open to new presenters and topics. So if you’d be interested in running a future lab session, feel free to contact Brad Blake at [email protected].

To stay up-to-date and informed on upcoming sessions, be sure to join the group here:

http://www.meetup.com/RMOUGLabs/

Also, be sure to “friend” us on Facebook to get updates on the labs, as well as other RMOUG events:

http://www.facebook.com/RMOUG

You can also follow us on Twitter @RMOUG_ORG


Server and operating system• Someone (usually not the application developer) will be in charge of ensuring that access to the operating system and hardware for database servers and application servers is locked down except for services required to run the application.Data access • Your application may need to restrict application or enterprise roles to working with subsets of data. The mechanism to filter data by user or group is a function of the application but embedded Oracle database features like Virtual Private Database (VPD, also called “Fine Grained Access Control”) are a great way to solve data filtering needs.Application security • Features and data components (fields, for example) within your application will be allowed or restricted to certain user groups. Setting this up and implementing it is an application developer function. This is the area that is relevant to this white paper and its hands-on practice.

In addition, you will also want to study any unfamiliar security topic such as SQL injection, cross-site scripting, denial-of-service, and URL hacking, so that you can construct test plans that include security attacks of different types.

Application Security

Application security is an important component of system design. You need to plan carefully which users should have access to particular application functions that manage specific data sets. Coding and testing this access is part of a complete system development effort. Most applications need to ensure the following security aspects:

• Only approved users can access the application.
• Users can access only the data they are allowed to access.
• Users cannot perform actions not designed in the system (for example, accessing server files).

Web applications, especially those with Internet audiences, need a publicly available web server that responds to HTTP requests. The application code and file privileges on this server can be limited to read-only. However, web applications handle the database connection automatically in the Model (database) layer code. As with Oracle’s Fusion and E-Business Suite applications, a single

Oracle WLS Application Security
Implementing the Superstition in JDeveloper

by Peter Koletzke, Quovera, and Duncan Mills, Oracle

Data is the lifeblood of an organization. Decisions are made; customers and clients are served; and careers are advanced because of data collected and available to online systems. Securing data and online systems from unauthorized access is a necessary requirement in modern IT projects. Therefore, despite the implementation complexities and Helen Keller to the contrary, we must apply due diligence for implementing the superstition of Absolute Security.

The Oracle WebLogic Server (WLS) offers a standard Java Enterprise Edition runtime service that executes web application code written in technologies such as Java servlets, JavaServer Pages, and JavaServer Faces. Oracle Platform Security Services (OPSS) provides standard Java Platform, Enterprise Edition (Java EE) security features to WLS so that you can ensure the safety of your application systems and data.

This white paper focuses on the general principles of application security. It briefly examines the security needs of a Java EE web application. It then provides an overview of how the OPSS security features fit into the picture of a web application at runtime. The main objective of these brief introductions is to get you started thinking about security needs for your applications and the features of WLS that can fulfill those needs.

We highly recommend that you follow up studying this white paper with a session where you use JDeveloper 11g to run the hands-on practice available online (www.quovera.com/whitepapers). This hands-on practice demonstrates techniques you can use in JDeveloper to hook your application up to the OPSS features in WLS. We think that following this practice within JDeveloper will not only inform you about these techniques but give you valuable experience with the required components. It is a necessary part of the complete picture for applying security in an ADF application.

Security Areas

A complete application security strategy consists of working in a number of areas such as the following:

Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure. Life is either a daring adventure or nothing.

—Helen Keller (1880–1968)


database user account is used for all users accessing the database. This means that application user accounts must be established outside of the database.

The WebLogic server provides authentication features that require users to log into an application session based on an account set up in a user repository such as a Lightweight Directory Access Protocol (LDAP) repository: Oracle Internet Directory (OID), for example. (Refer to the Spring 2012 issue of RMOUG SQL>Update for details about LDAP.) The user accounts on the application server act as user accounts for a specific application. Application logic then manages specific application privileges to these users.

Authentication and Authorization

The normal way to satisfy the requirement of allowing application access to only approved users is with a login screen where users enter credentials such as a name and password. Logging into a Java EE application accomplishes two objectives—both of which are provided by the runtime process—authentication and authorization.

Authentication

The security service validates the user’s credentials based upon a user name and password, or potentially a token-based mechanism such as a Secure Socket Layer (SSL) certificate or biometric device such as a fingerprint scanner. The user name and password approach is the most familiar and common implementation of authentication. The user’s name, password, and a definition of the groups to which they belong are stored in a user repository (also called an identity store or credentials store).
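As a deliberately simplified sketch of this validation step, the repository stores a hash of the password and login succeeds only if the submitted credential hashes to the same value. (Production identity stores use salted, iterated hashes; plain SHA-256 appears here only for brevity, and the sample password is invented.)

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Simplified sketch of credential validation against a stored hash.
// Real identity stores use salted, iterated hashes (e.g. PBKDF2);
// plain SHA-256 is used here only to keep the example short.
public class AuthSketch {

    public static byte[] hash(String s) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // True if the submitted password hashes to the stored value.
    public static boolean matches(byte[] storedHash, String attempt) {
        // MessageDigest.isEqual performs a time-constant comparison.
        return MessageDigest.isEqual(storedHash, hash(attempt));
    }

    public static void main(String[] args) {
        byte[] stored = hash("tiger"); // what the user repository would hold
        System.out.println(matches(stored, "tiger")); // prints "true"
        System.out.println(matches(stored, "scott")); // prints "false"
    }
}
```

The point of hashing is that the repository never needs to hold the password itself, only a value from which the password cannot practically be recovered.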

The OID component of Oracle WebLogic Server provides LDAP services, but you can alternatively use other LDAP systems. As with its predecessor (Oracle Application Server), WLS supports Single Sign-On (SSO)—a facility for passing user login information between applications. It is intended as a provider for an enterprise-level, production environment. The sidebar “Testing Application Security in JDeveloper” describes how the enterprise user repository can be emulated for a local WLS runtime.

Authorization

After passing the authentication stage to verify the user, the security service provides access to information about the user to the application. This information may take the form of a list of groups to which the user belongs in the user repository. These roles are then mapped to the application (logical) roles within the application. The application roles are used in the definition of rules that allow access to parts of the application. In a Java EE application, the rules and group-to-role mappings are stored in a configuration file. The user, who logged in during the authentication stage, is given access to application functions based on application roles.
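The group-to-role mapping just described can be sketched as follows. The group and role names are invented, and in a real Java EE application the mapping lives in a configuration file rather than in code:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of mapping identity-store groups to application (logical) roles.
// In a real Java EE application this mapping lives in a configuration file.
public class RoleMappingSketch {

    // Hypothetical mapping: repository group -> application roles.
    static final Map<String, Set<String>> GROUP_TO_ROLES = Map.of(
            "hr_managers", Set.of("ExpenseReportApprover", "EmployeeViewer"),
            "warehouse_clerks", Set.of("InventoryEditor"));

    // Resolve the application roles for the groups the user belongs to.
    public static Set<String> rolesFor(Set<String> userGroups) {
        Set<String> roles = new TreeSet<>();
        for (String group : userGroups) {
            roles.addAll(GROUP_TO_ROLES.getOrDefault(group, Set.of()));
        }
        return roles;
    }

    public static void main(String[] args) {
        // The groups would come from the LDAP repository after authentication.
        System.out.println(rolesFor(Set.of("hr_managers")));
        // prints "[EmployeeViewer, ExpenseReportApprover]"
    }
}
```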

In addition, when needed, the application can read the logged-in user’s name and role and hide or disable restricted parts of the application appropriately.

Java Authentication and Authorization Services

It is best to rely on prebuilt frameworks when faced with the task of implementing security for an application. This not only saves you work, but ensures that you are not missing any features required when securing a system. Fortunately, the Java Development Kit (JDK) offers a framework for this purpose: Java Authentication and Authorization Services (JAAS). JAAS offers functionality that you can use by calling its APIs to verify user logins and restrict access to resources. This library also provides an industry-standard method for authentication and authorization. The JAAS features are available to application client (desktop) applications as well as web client applications.
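A minimal sketch of the JAAS model described here: an authenticated user is represented by a Subject holding Principal objects, which authorization code can then inspect. In a running application a JAAS LoginModule populates the Subject during login; in this sketch it is populated by hand:

```java
import java.security.Principal;
import javax.security.auth.Subject;

// Minimal sketch of the JAAS Subject/Principal model. In a real login,
// a JAAS LoginModule populates the Subject; here we do it by hand.
public class JaasSketch {

    // A simple Principal implementation representing a group/role name.
    public static final class RolePrincipal implements Principal {
        private final String name;
        public RolePrincipal(String name) { this.name = name; }
        @Override public String getName() { return name; }
    }

    // Authorization check: does the authenticated subject hold the role?
    public static boolean hasRole(Subject subject, String role) {
        return subject.getPrincipals().stream()
                .anyMatch(p -> p.getName().equals(role));
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.getPrincipals().add(new RolePrincipal("manager"));
        System.out.println(hasRole(subject, "manager")); // prints "true"
        System.out.println(hasRole(subject, "admin"));   // prints "false"
    }
}
```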

Oracle Platform Security Services

The Oracle Platform Security Services (OPSS) of the WebLogic Server is responsible for providing hooks from the application to JAAS facilities. According to an FAQ for OPSS, the services rendered include: “Security (authentication, authorization, SSO, credential store management, key store management); Audit; Cryptography (encryption and signature); Certificate lookup and validation; User roles; Credential mapping; Role mapping; Java EE policy and role deployment; Java2 and JAAS Policy Provider.” (currently at www.oracle.com/technetwork/testcontent/opss-faq-131489.pdf)

ADF Security

Oracle Application Development Framework (ADF) is an architecture supplied by JDeveloper, which offers the developer consistent methods for working with various ADF and non-ADF frameworks (code libraries) like Enterprise JavaBeans (EJBs), ADF Business Components, and ADF Faces. ADF Security is another one of the ADF frameworks. It provides a layer on top of OPSS that makes connecting application components to security services relatively easy. For example, after setting the application up to use ADF Security, most code that connects pages, page fragments, and elements on the page to security checks is done in a declarative way (the “ADF Way”). The following diagram depicts the relationship between ADF Security, OPSS, and JAAS.

Note

This white paper and the online hands-on practice describe techniques and features in the most currently available version of JDeveloper 11g—11.1.2. Techniques and features may vary somewhat with other versions of JDeveloper 11g.

Testing Application Security in JDeveloper

It is important to work security into the application as early as possible during development so you can test all possible scenarios while completing a certain application functional area. When you test an application in JDeveloper, you run the integrated WebLogic Server locally and you will normally not tap into the enterprise LDAP server. ADF Security allows you to set up test accounts and roles (as demonstrated in the hands-on practice) that you can use to try out access to application functions. When the application is deployed to the enterprise server, the test accounts will not be copied, but you will need to be sure the enterprise roles you used to test the application are set up on the server.


9. The application connects to the database using the application database user account (APPUSER) written into a data source on the application server.

Levels of Security in a Web Application

Due to the highly accessible nature of the World Wide Web, web applications are potentially more widely available than intranet or WAN applications. Therefore, an approach that addresses multiple layers is necessary as follows:

• Database user — All web application users connect to the database using a single database user account. This application database user account would be different from the application database object owner account. It would be granted access to only the required application objects.
• Application user account — Just as database grants must be in place so the application user account can access the application owner account’s objects, the application needs to set and interrogate privileges when presenting menu options, pages, or components on a page.
• Application user data access — Access to pages and components can provide security at the table level. However, this level may not be sufficient. Your application may also require restriction to specific rows within a table. You can accomplish this by adding WHERE clause components that read the database user or by using table policies (VPD).
• Data query restrictions — Query-By-Example (Find mode) screens may allow the user the ability to query data in an unintended way using SQL injection. The ADF Business Components model-based query criteria object implements a secure method for user queries that is based on bind variables and is immune to SQL injection.
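To make the SQL injection point concrete, here is an illustrative contrast (not ADF code; the table and column names are invented) between string-concatenated SQL, which an attacker can rewrite, and a bind-variable statement whose text is fixed:

```java
// Illustrative contrast between concatenated SQL (injectable) and a
// bind-variable statement (not injectable). This is not ADF code; in JDBC
// the value would be supplied separately via PreparedStatement.setString.
public class BindVariableSketch {

    // Unsafe: user input is pasted directly into the statement text.
    public static String unsafeQuery(String lastName) {
        return "SELECT * FROM emp WHERE last_name = '" + lastName + "'";
    }

    // Safe: the statement text never changes; the value travels separately.
    public static final String SAFE_QUERY =
            "SELECT * FROM emp WHERE last_name = ?";

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The attacker's predicate becomes part of the unsafe statement...
        System.out.println(unsafeQuery(attack).contains("OR '1'='1")); // prints "true"
        // ...but can never enter the fixed bind-variable statement text.
        System.out.println(SAFE_QUERY.contains(attack));               // prints "false"
    }
}
```

Bind variables also help the database reuse parsed statements, so the secure form is usually the faster form as well.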

Groups and Roles

A user repository enrolls users in groups so privileges can be granted by group, not by individual users, who may change job functions over time. Groups reflect the job functions of users, for example, HR Manager or Warehouse Clerk.

The application defines roles, a set of functional privileges to which application areas are granted. For example, a role called Expense Report Approver might be given the ability to approve or reject expense reports. Other roles would not be able to perform that function. The application server would contain a set of these roles (called “groups” on the application server side) and the application would map the application

The later section “Setting up ADF Security” summarizes the steps required in using this feature in your ADF application.

Directory Services

In addition to the security services provided by the JDK through OPSS to the application, the application server uses directory services software to link between the application server and an established user access control list (user repository) that is external to the application server. Other (and multiple) user repositories can be plugged into this service by use of login modules, which provide standard access to other types of user lists such as Oracle database accounts.

In Oracle WLS, any LDAP server supplies this directory service. The examples that follow use LDAP services from Oracle Internet Directory, which can tap into the user list in an existing LDAP system (such as the Microsoft Active Directory). LDAP repositories are used to validate network users for file and directory access privileges outside of web (and other) applications. The communication path for this strategy is shown in Figure 1.

The process flow for the example of an application server login in this diagram follows:

1. The user sends an HTTP request using a URL including a context root that identifies a specific application (tuhra in this case). The request determines that ADF security is active in the application.
2. The authentication service determines the type of authentication and presents a login page.
3. The user enters a name and password and submits the login page.
4. The authentication service requests OID to verify the user and password.
5. OID verifies the password from the LDAP source and indicates pass or fail to the authentication service.
6. The authentication service accesses the application and places the user name into the HTTP session.
7. During the application session, the application can request the username (“Joe”) or group (role, in this example, “manager”) to which the user belongs so it can render the pages accordingly.
8. web.xml activates ADF Security for authorization to specific resources like pages and task flows.

Note

The starting point for documentation about WLS security features is the Oracle Fusion Middleware security documentation library currently at docs.oracle.com/cd/E21764_01/im.htm (specifically the Oracle Fusion Middleware Application Security Guide 11g).

Figure 1. Directory services used for a Java EE web application


available to the enterprise’s application server.
4. Map the application roles to the enterprise roles so the application can read the user’s privileges directly from the authentication source (such as an LDAP server).
5. Authorize access to pages and task flows within the application. You set up security policies (like database grants) that allow users in certain groups to access a page or task flow.
6. Authorize access to page components such as fields. Using properties or back-end Java code, you can disable (make read-only) or hide fields or links that are restricted to certain user groups.
7. Create a login page to use for your application instead of the less secure HTTP basic authorization (popup) dialog.
8. Access information about the logged-in user so you can display user-relevant content or functionality.

Summary

This brief introduction to ADF application security has discussed the basic security services of WLS and how they can be defined to read from LDAP user repositories. It explained the basic steps for login: authentication and authorization. It also outlined how various server and client components interact in the security process when a web application is run, as well as some of the terminology used when setting up and using security services. With this background along with the techniques demonstrated in the hands-on practice, you will be able to design and program security features into your ADF application so your data and systems can be as safe as possible and, therefore, attempt the elusive goal of Absolute Security.

About the Authors

Duncan Mills is senior architect for Oracle’s ADF Product Development group. He has been working with Oracle in a variety of application development and DBA roles since 1988. For the past 16 years he has been working at Oracle in both customer support and product development, spending the last 10 years in product management for the development tools platform. Duncan is a frequent presenter at industry events and has many publications to his credit, including coauthorship of the Oracle Press books Oracle JDeveloper 11g Handbook and Oracle JDeveloper 10g for Forms and PL/SQL Developers.

Peter Koletzke is a technical director and principal instructor for the Enterprise e-Commerce Solutions practice at Quovera, in Mountain View, California, and has 28 years of industry experience. Peter has presented at various Oracle users group conferences more than 300 times and has won awards such as Pinnacle Publishing’s Technical Achievement, Oracle Development Tools Users Group (ODTUG) Editor’s Choice (twice), ODTUG Best Speaker, ECO/SEOUC Oracle Designer Award, ODTUG Volunteer of the Year, and NYOUG Editor’s Choice (three times). He is an Oracle Certified Master, Oracle ACE Director, and coauthor (variously with Dr. Paul Dorsey, Avrom Roy-Faderman, and Duncan Mills) of eight Oracle Press development tools books including Oracle JDeveloper 11g Handbook (from which some of the material in this white paper is taken).

server’s role-groups to the application’s functional groups. The enrollment of a user in a role-group would be maintained on the application server’s directory services (user repository), but the mapping of the user repository roles to application roles would be part of the application.

Security Policy

In addition to mapping application server groups to application roles, the application also defines security policies, definitions for privileges. A security policy consists of principals—the role or roles who will be allowed to perform the function; resources—the application functions to which the policy allows access; and permissions—the actions that can be performed by the users in the granted roles. These three components of a security policy are very similar to the definition elements in a database grant, as shown in the following table:

Setting up ADF Security

ADF Security allows your application to leverage the WLS security services. The easiest way to explain the steps for setting up ADF Security is with a demonstration. As mentioned, this demonstration is available at www.quovera.com/whitepapers; look for the white paper with the same name as this article under the “Rocky Mountain Oracle Users Group - February 2012” heading. The last section of the white paper consists of a hands-on practice that steps through the techniques you will use when applying ADF Security policies to an ADF application. A summary of these steps (some of which are optional) follows as a preview:

1. Enable ADF Security on the application so the entire application is under the control of ADF Security. All requests for objects and pages will then pass through the ADF Security filters, which authorize or deny access.
2. Define application roles for which you will create security policies for functions and data in the application. These roles will be deployed as part of the application.
3. Define enterprise (credential store) roles and test users that will assist in testing the application before deployment. These roles and users will not be deployed with the final production application because they will already be

Security Policy Component    Database Grant Element

Principal                    Database user or role granted the privilege.
Resource                     Database object such as a table, view, or PL/SQL program unit to which the grant applies.
Permission                   The operation allowed on the table, view, or PL/SQL program unit, for example, INSERT or EXECUTE.
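The principal/resource/permission triple can be sketched as a simple in-memory policy check. The role and resource names below are invented, and real ADF/OPSS security policies are declared in configuration rather than coded like this:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a security policy as the (principal, resource, permission)
// triple described above. Real ADF/OPSS policies are declarative; the
// names used here are invented for illustration.
public class PolicySketch {

    private static final Set<String> POLICIES = new HashSet<>();

    // Analogous to a database grant: GRANT <permission> ON <resource> TO <role>.
    public static void grant(String role, String resource, String permission) {
        POLICIES.add(role + "|" + resource + "|" + permission);
    }

    public static boolean allowed(String role, String resource, String permission) {
        return POLICIES.contains(role + "|" + resource + "|" + permission);
    }

    public static void main(String[] args) {
        grant("ExpenseReportApprover", "approvalPage", "view");
        System.out.println(allowed("ExpenseReportApprover", "approvalPage", "view")); // prints "true"
        System.out.println(allowed("WarehouseClerk", "approvalPage", "view"));        // prints "false"
    }
}
```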

We will bankrupt ourselves in the vain search for absolute security.

—Dwight David Eisenhower (1890–1969)


Why Do We Build Dimensional Models?

In a standard data warehouse implementation, some portion of our data model will be dimensional in nature: a star schema with facts and dimensions. This is true for the Corporate Information Factory methodology espoused by Bill Inmon [1], or the Data Warehouse Bus Architecture described by Ralph Kimball [2]. I’d like to pose a question that takes us headfirst into the discussion of what I call Extreme BI: Agile Data Warehousing with the Oracle Exadata Database Machine (Exadata) and Oracle Business Intelligence Enterprise Edition (OBIEE):

Why do we build dimensional models?

The first reason is model simplicity. We want to model our reporting structures in a way that makes sense to the business user, and dimensional models are typically the way the business user sees the business: simple, inclusive structures for each entity. The standard OLTP data model that takes two of the four walls in the conference room to display will never make sense to your average business user. At the end of a logical modeling exercise, the end user should have a look at a completed dimensional model and say: “Yep, that’s our business all right.” The second reason we build dimensional models is for performance. Denormalizing highly complex transactional models into simplified star schemas generally produces tremendous performance gains. So now, let me ask a follow-up question:

Can Extreme BI change the way BI and data warehousing projects are delivered?

The performance gains that Exadata can deliver for data warehouse systems are typically many orders of magnitude over traditional architectures, and these types of performance gains should never be minimized. For the purposes of this paper, however, I want to focus instead on the transformational power that Exadata, combined with OBIEE, can have on the way data warehouse projects are delivered and managed. Specifically, I’m interested in this combination being the true enabler for Agile methodologies in data warehousing.

[1] Bill Inmon et al., Corporate Information Factory (Wiley Publishing, Inc., 2001, Second Edition)
[2] Ralph Kimball et al., The Data Warehouse Lifecycle Toolkit (Wiley Publishing, Inc., 2008, Second Edition)

Agile Data Warehousing With Exadata and OBIEE

The Case for Extreme BI

by Stewart Bryson, Rittman Mead

Waterfall, or “Let’s talk about requirements… again”

To start with, I’d like to paint a picture of what the typical waterfall data warehousing project looks like. The tasks we usually have to complete, in order, are the following:

1. Interview users
2. Construct requirement documents
3. Create logical data model
4. SQL prototyping of source transactional models
5. Document source-to-target mappings
6. ETL development
7. Front-end development (analyses and dashboards)
8. Performance tuning

In traditional data warehouse projects, all of these steps are required before end users can see the fruits of our labor. To mitigate this scenario, organizations attempt what they consider “Agile” delivery, but in my experience, this is a simple repackaging of the same waterfall project plan into “iterations” or “sprints,” so that the project can be delivered iteratively. So the process might look like the following:

Iteration 1: Interviews and user requirements
Iteration 2: Logical modeling
Iteration 3: ETL development
Iteration 4: Front-end development

Just to be clear: this is not Agile. We would still require four iterations before users get any usable content. It doesn’t matter if we’ve written some complex ETL to load a fact table if the end user doesn’t have a working dashboard to accompany it. To get an understanding of what lies at the heart of Agile development, we need to look no further than the Agile Manifesto, or the history of the Agile Movement.

The Agile Manifesto, or “Skiing in Utah”

In 2001, a group of software developers, representing methodologies such as Extreme Programming, Scrum, Adaptive Programming, Pragmatic Programming, and others, convened at the Lodge at Snowbird in Snowbird, Utah, to discuss software


development, and, of course, to ski. After two days of discussions, the group wrote and signed the Manifesto for Agile Software Development, which is displayed in Figure 1. Although the different Agile methodologies existed before the Utah meeting, this was the first time anyone used the word Agile to describe these methodologies, and the first time that a larger group of pre-Agile development disciplines pulled together to publish driving principles about what they all had in common.

There is a theme that permeates all the Agile methodologies: working software delivered iteratively. It’s not enough to simply deliver the same old waterfall methodology in “sprints” or “iterations,” because, at the end of those iterations, we don’t have any working software to empower end users to make better decisions, or perform their jobs more effectively. To apply the Agile Manifesto to data warehouse and business intelligence delivery, the following key elements are required to deliver with an Agile spirit:

• User stories instead of requirements documents: a user asks for particular content through a narrative process, and includes in that story whatever process they currently use to generate that content.
• Time-boxed iterations: iterations always have a standard length, and we choose one or more user stories to complete in that iteration.
• Rework is part of the game: there aren’t any missed requirements… only those that haven’t been addressed yet.

Oracle Next-Generation Reference Data Warehouse Architecture, or “We need an acronym for that!”

Organizations rely on data warehouses to serve three specific purposes:

• Provide data stores for historical data separate from the source system. This is important because we shouldn’t be at the mercy of the transactional designers about what data should be stored for posterity, and what data is fleeting.
• Transform our data structures with reporting purposes in mind, and imbue those structures with business requirements.
• Provide quick, targeted data structures for loading and facilitating purposes 1 and 2 above.

The Oracle Next-Generation Reference Data Warehouse Architecture, depicted in Figure 2, uses three logical layers to facilitate the main functional components of the data warehouse to respond to those purposes. Each of these layers plays a role in maintaining flexibility to support structured and unstructured data sources for access by BI tools for both strategic and operational reporting. Using multiple layers in a single data warehouse allows the environment to address changing business requirements without losing reporting capabilities.

Staging Layer

The staging layer is the “landing pad” for incoming data, and is typically made up of regular, heap-organized tables that are populated by extraction routines from a range of data sources. This includes incremental change tables for any method of change-data capture (CDC), including custom CDC routines, Oracle Database CDC or Oracle GoldenGate (see Capturing Change below). Also in this layer are staging tables used to assist with ETL processing, including data reject tables and external tables for querying flat files.

Foundation Layer

While a dimensional model excels at information access, it’s cumbersome at storing data for the long term, particularly if analysis requirements change over time. The foundation layer in the reference architecture acts as the information management layer, storing detail-level data from business systems in query-neutral form that will persist over time. This is our process-neutral layer, which means that we don’t imbue this layer with requirements about what users want to see in reports and how they want to see it. Instead, the foundation layer has one job and one job only: tracking what happened in our source systems in a neutral form. Typically, the foundation layer logical model looks identical to the source systems, except that we have a few additional metadata columns on each record such as commit timestamps and Oracle Database system change numbers (SCNs). Our foundation layer is generally insert-only, meaning we track all history so that we are insulated from changing user requirements in the near and distant futures. There are other, more complex solutions for modeling the foundation layer when the 3NF from the source system or systems is not sufficient. Data Vault, a modeling solution that attempts to improve upon simple 3NF by separating the business keys and the associations between those business keys from the descriptive elements that describe the entities defined by those keys, is one such solution.3
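The insert-only behavior of the foundation layer can be sketched as follows. The column names and SCN values are invented, and in practice this would be a database table loaded by the CDC/ETL process rather than an in-memory list:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the insert-only foundation layer: a change to a source row is
// appended as a new version (tagged with an SCN-like number) rather than
// updating in place, so the full history is preserved. Names are invented.
public class InsertOnlySketch {

    public static final class Version {
        public final long scn;     // stand-in for the Oracle SCN
        public final String key;   // business key of the source row
        public final String value; // state of the row at that SCN
        public Version(long scn, String key, String value) {
            this.scn = scn; this.key = key; this.value = value;
        }
    }

    private static final List<Version> FOUNDATION = new ArrayList<>();

    // Capture a change: always insert, never update or delete.
    public static void capture(long scn, String key, String value) {
        FOUNDATION.add(new Version(scn, key, value));
    }

    public static int historySize() {
        return FOUNDATION.size();
    }

    public static void main(String[] args) {
        capture(1001, "cust42", "Denver");
        capture(1002, "cust42", "Boulder"); // source row updated; we append
        System.out.println(historySize()); // prints "2": both versions kept
    }
}
```

Because nothing is ever overwritten, the layer can answer questions users have not asked yet, which is exactly the insulation from changing requirements described above.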

3 Daniel Linstedt et al., Super Charge Your Data Warehouse: Invaluable Data Modeling Rules to Implement Your Data Vault (CreateSpace, 2011)

Figure 1. Manifesto for Agile Software Development

Figure 2. The Oracle Next Generation Reference Data Warehouse Architecture


Photos From Our MembersWe’ve had so many wonderful photos submitted to us - we would like to share them with you !

Photographer: Glenn Goodrum
North Fork of the South Platte River in Pine Valley Ranch Park, a Jefferson County park in Pine, CO.

Photographer: Tony Golden
Blossoms in Fraser

Photographer: Jed Walker
This cutie was born in a nest in a hanging flower pot on his front porch

Photographer: Joe Boncha
Camping in the pass


Photographer: Peter Wenker
American Basin “drainage” in the San Juan range


Photographer: Glenn Goodrum
Boreas Pass Road between Como and Breckenridge



Access and Performance Layer

The Access and Performance Layer is where we find our dimensional models. Our main purpose in this layer is to optimize our model to address business requirements, and provide easy and efficient access for particular BI tools. The key benefit is that our data warehouse should now last beyond the immediate requirements that our users have for a dimensional model. While Ralph Kimball argues that we can gracefully adapt a dimensional model over time to incorporate changes and new data4, in reality this is often difficult to do.

Extreme BI recommends bypassing, either temporarily or permanently, the inhibitors specific to data warehousing projects that limit our ability to deliver working software quickly. Specifically, this methodology recommends waiting to build and populate a physical access and performance layer until a later phase, if at all. Remember the two reasons we build dimensional models: model simplicity and performance. With Extreme BI, we have tools to counter both of those reasons. We have OBIEE 11g, with a rich metadata layer that presents our underlying data model, even if it is transactional, as a star schema to the end user. This removes our dependency on a simplistic physical model to provide a simplistic logical model to end users. We also have Exadata, which delivers world-class performance against any type of model, and can bridge the performance gap afforded by star schemas. With these tools at our disposal, we can postpone the long process of building dimensional models, at least for the first few iterations. This is the only way to get working software in front of the end user in a single iteration, and, as I will argue, this is the best way to collaborate with an end user and deliver the content they are expecting.

Capturing Change, or “GoldenGate’ing Your Data”

An interval will always exist between the occurrence of a measurable event and our ability to process that event as a reportable fact, as demonstrated in Figure 3. To drive the Oracle Next-Generation Reference Architecture, and therefore Extreme BI, our ETL extraction process must identify changes in the source system and replicate them to the foundation layer on the Exadata Database Machine as close to “immediately” as possible. Oracle provides the following two solutions for standard change-data capture:

Oracle Database Change Data Capture (CDC)
Oracle GoldenGate

Both of these options are superior to hand-coded SQL for determining changes, or manual detection systems that use triggers, because they use the REDO already generated by the system for fault tolerance. Oracle GoldenGate is the technologically superior choice in pure replication: it’s simple, flexible, powerful and resilient. We have a bit of a tug-of-war at this point between Oracle Database CDC and Oracle GoldenGate. GoldenGate is the stated platform of the future; however, it does not yet have powerful change data capture functionality specific to data warehouses, such as easy subscriptions to raw changed data, or support for multiple subscription groups. You can work around these limitations using the GoldenGate configuration parameter INSERTALLRECORDS and some custom code, but hopefully we will have more transparent features in the future to address these concerns.

Data Modeling Rules to Implement Your Data Vault, (CreateSpace, 2011)

4 Ralph Kimball et al., The Kimball Group Reader, (Wiley Publishing, Inc., 2010), Kindle Edition, Location 3,843
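With INSERTALLRECORDS, GoldenGate converts source updates and deletes into inserts on the target, producing an audit-style table of row versions. A minimal sketch of the “custom code” side — reconstructing the current image of each row from such a versioned table — might look like the following. All table and column names here (CUSTOMER_VERSIONS, COMMIT_TS, OP_TYPE) are illustrative assumptions, not objects from the article:

```sql
-- CUSTOMER_VERSIONS is a hypothetical GoldenGate target loaded with
-- INSERTALLRECORDS: every source INSERT/UPDATE/DELETE arrives as a new row,
-- stamped by the replicat with a commit timestamp and operation type.
SELECT customer_id,
       customer_name,
       customer_city
  FROM (SELECT v.*,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY commit_ts DESC) AS rn
          FROM customer_versions v)
 WHERE rn = 1
   AND op_type <> 'D';   -- drop rows whose latest version is a delete
```

The same versioned table can also feed effective-dated dimension loads, since every historical state of the row is preserved.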

Extreme Performance, or “Watch your queries scream!”

The reason we need Extreme Performance is to offset the performance gains we usually get from the access and performance layer, which we won’t be building, at least in the initial iterations. Although variants of this methodology have been deployed sans Exadata using a powerful Oracle Database RAC instead, there is no substitute for Exadata. Although the hardware on the Database Machine is superb, it’s really the software that is a game-changer. The most extraordinary features include Smart Scan and Storage Indexes, as well as Hybrid Columnar Compression (HCC). For years now, with standard Oracle data warehouses, we’ve pushed the architecture to its limits trying to reduce IO contention at the cost of CPU utilization, using database features such as partitioning, parallel query and basic block compression. Exadata Storage can eliminate the IO boogeyman using combinations of these standard features plus the Exadata-only features mentioned above to elevate query performance against 3NF schemas to par with traditional star schemas and beyond.
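HCC, for example, is enabled declaratively when the table (or partition) is created, and can be combined with the standard partitioning features mentioned above. A sketch for a foundation-layer sales table — the table and column names are illustrative, and HCC itself requires Exadata storage:

```sql
-- QUERY HIGH is one of the warehouse-oriented HCC levels; ARCHIVE LOW/HIGH
-- trade additional CPU for higher compression ratios.
CREATE TABLE sales_fdn (
  sale_id      NUMBER,
  customer_id  NUMBER,
  sale_date    DATE,
  amount       NUMBER(12,2)
)
COMPRESS FOR QUERY HIGH
PARTITION BY RANGE (sale_date) (
  PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01')
);
```

Smart Scans against a table like this return only the needed columns and rows from storage, which is what lets 3NF queries compete with star-schema queries.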

Extreme Metadata, or “The BI Server is very, very smart”

Extreme performance is only half the battle; we also need Extreme Metadata to provide us the proper level of abstraction so that report and dashboard developers still have a simplistic model to report against. This is what OBIEE 11g brings to the table. Variants of this methodology have also been delivered without OBIEE, using Cognos instead, which has a metadata layer called Framework Manager. As with the “secret sauce” around software-only features in the Exadata Storage and the Exadata Database Machine, the Oracle BI Server has no equal in the metadata department, specifically around Intelligent Request Navigation.

Figure 3: Latency in Reportable Facts

Figure 4: Benefits Multiply: Converting Terabytes to Gigabytes

SQL>UPDATE • Summer 2012 19

The Model-Driven Iteration, or “Do it all in the Admin Tool”

I’ll gradually introduce the different types of generic iterations that we engage in, focusing on what I call the “Model-Driven” iteration first. Our first few iterations are always Model-Driven. We begin when a user opens a story requesting new content. Any request for new content requires that all the following elements are included in the story:

A narrative about the data they are looking for, and how they want to see it. We are not looking for requirements documents here, but we are looking for the user to give a complete picture of what it is that they need.

An indication of how they report on this content currently. In a new data warehouse environment, this would include some sort of report that they are currently running against the source system, and in a perfect world, this would involve the SQL that is used to pull that report.

An indication of features that are “nice-to-haves.” This might include data that isn’t available to them in the current paradigm of the report, or was simply too complicated to pull in that paradigm. After an initial inspection of these nice-to-haves and the complexity involved with including them in the story, the project manager may decide to pull these elements out and put them in a separate user story. This, of course, depends on the Agile methodology used, and the individual implementation of that methodology.

First we assign the user story to an RPD developer, who uses the modeling capabilities in the OBIEE Admin Tool to “discover” the logical dimensional model hidden inside the user story, and develop that logical model inside the Business Model and Mapping (BMM) layer. Unlike a “pure” dimensional modeling exercise where we focus only on user requirements and pay very little attention to source systems, in Model-Driven development, we constantly shift between the source of the data, and how best the user story can be solved dimensionally. Instead of working directly against the source system though, we are working against the foundation layer in the Oracle Next-Generation Reference Data Warehouse Architecture. We work from a top-down approach, first creating empty facts and dimensions in the BMM, and later mapping them to foundation layer tables in the physical layer through complex joins and judicious use of LTS’s.

To take a simple example, we can see how a series of foundation layer tables developed in 3NF could be mapped to a logical


Intelligent Request Navigation is the process that the BI Server goes through in turning logical SQL requests into physical SQL generation. The development of the semantic layer and its representation in the OBIEE Admin Tool is depicted in Figure 5.

Consider, for a moment, the evolution of dimensional modeling in deploying data warehouses. Not so long ago, we had to solve most data warehousing issues with the physical model because BI tools were simplistic. Generally, there was no abstraction of the physical into the logical, unless you categorize the renaming of columns as abstraction. As these tools evolved, we often found ourselves with a choice: solve some user needs in the logical model, or solve it with the feature set of the BI tool. The use of aggregation in data warehousing is a perfect example of this evolution. Designing aggregate tables used to be just another part of the logical modeling exercise, and these structures were generally represented in the published data model for the EDW. But now, building aggregates is more of a technical implementation than a logical one, as both the BI Server and the Oracle Database are aggregate aware and can handle the transparent navigation to aggregate tables in lieu of detail tables.
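On the database side, aggregate awareness is typically delivered with materialized views and query rewrite: the optimizer silently redirects detail-level queries to the summary table. A minimal sketch, assuming the hypothetical sales detail table shown below (object names are illustrative, not from the article):

```sql
-- With query rewrite enabled, a query that sums sales by month against the
-- detail table can be transparently answered from this materialized view,
-- with no change to the query or the semantic layer.
CREATE MATERIALIZED VIEW sales_by_month_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT TRUNC(sale_date, 'MM') AS sale_month,
       SUM(amount)            AS total_amount,
       COUNT(*)               AS row_cnt
  FROM sales_fdn
 GROUP BY TRUNC(sale_date, 'MM');
```

The BI Server offers the same transparency at its own layer, by mapping an aggregate table as an additional Logical Table Source at the monthly grain.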

The metadata that OBIEE provides adds two necessary features for Agile delivery. First, we are able to report against complex transactional schemas, but still expose those schemas as simplified dimensional models. This allows us to bypass the complex ETL process, at least initially, so that we can get new subject areas into the users’ hands in a single iteration. OBIEE’s capability to map multiple Logical Table Sources (LTS’s) for the same logical table also makes the logical abstractions easy to modify — or “remap” — over time. In later iterations, if we find it necessary to embark upon complex ETL processes to complete user stories, we can reflect these changes in the metadata layer without affecting our reports and dashboards, or changing the logical model that report developers are used to interacting with.

Figure 5: The Semantic Layer in the OBIEE Admin Tool

Figure 6: Logically Mapping a Dimension Table

Figure 7: Logically Mapping a Fact Table


What we have at the end of the iteration is a completely abstracted view of our model: a complex, transactional, 3NF schema presented to our end users as a star schema. We are able to deliver portions of a subject area in this way, which is important for time-boxed iterations. The Extreme Metadata of OBIEE 11g allows us to remove this complexity in a single iteration, but it’s the performance of the Exadata Database Machine that allows us to build real analyses and dashboards and present that content to the general user community.

ETL (Extract, Transform, Load) Iteration, or “Physicalizing”

Our first several iterations will always be Model-Driven as we work with the end user to fine-tune the content he or she wants to see on the OBIEE dashboards. As user stories are opened, completed and validated throughout the project, end users are prioritizing them for the development team to work on. Eventually, there will come a time when an end user opens a story that is difficult to model in the semantic layer. Processes to correct data quality issues are good examples, and despite having the power of Exadata at our disposal, we may find ourselves in a performance hole that even the Database Machine can’t dig us out of. In these situations, we reflect on our overall solution and consider the maxim of Agile methodology: refactoring, or rework.

For Extreme BI, the main form of refactoring is ETL. The pessimist might say: “Well, now we have to do ETL development, what a waste of time all that RPD modeling was!” It’s easy to see that this is not the case if we think about our users. They have been running dashboards for some time now with at least a portion of the content they need to get their jobs done. As any Agile proponent will tell you: some is better than none. Additionally, the process of doing the Model-Driven iteration puts our data modelers and our ETL developers in a favorable position. We’ve eliminated the exhaustive data modeling process, because we already have our logical model in the BMM, as demonstrated in Figure 10. We also have our source-to-target information documented in the semantic metadata layer. We can see that information using the Admin Tool, depicted in Figure 6 and Figure 7 above, or we can also use the Repository Documentation option to generate some documented source-to-target mappings. When embarking on ETL development, it’s common to do SQL prototyping before starting the actual mappings to make sure we understand the particulars of granularity. We already have these SQL prototypes from OBIEE’s NQQUERY.LOG file, as depicted in Figure 9. The combination of the source-to-target mapping and the SQL prototypes provide all the artifacts necessary to get started with ETL development.

dimension table as our Customer dimension. This is demonstrated with the “joiner” configuration in Figure 6. I rearranged the layout from the Admin Tool to provide an “ETL-friendly” view of the mapping. All the way to the right, we can see the logical, dimensional version of our Customer table, and how it maps back to the source tables. This mapping could be quite complicated, with perhaps dozens of tables. The important thing to keep in mind is that this complexity is hidden from not only the consumer of the reports, but also from the developers. We can generate a similar example of what our Sales fact table would look like by examining Figure 7. Another way of making the same point is to look at the complex, transactional model and compare this to the simplified, dimensional model, which is demonstrated in Figure 8.

Finally, when we view the subject area during development of an analysis, all we see are facts and dimensions. The front-end developer can be blissfully ignorant that he or she is developing against a complex transactional schema, because all that is visible is the abstracted logical model.

When mapping the BMM to complex 3NF schemas, the BI Server is very, very smart, and understands how to do more with less. Using the metadata capabilities of OBIEE is superior to other metadata products, or to a “roll-your-own metadata” approach using database views, because of the following:

The generated SQL usually won’t include self-joins, even when tables exist in both the logical fact table and the logical dimension table. The BI Server will only include tables that are required to facilitate the intelligent request, either because a table has columns mapped to the attributes being requested, or because the table is a required reference table to bring disparate tables together. Any tables not required to facilitate the request will be excluded.

Since the entire user story needs to be closed in a single iteration, the user who opened the story needs to be able to see the actual content. This means that the development of the analysis (or report) and the dashboard are also required to complete the story. It’s important to get something in front of the end user immediately, but it doesn’t have to be perfect. We should focus on a clear, concise analysis in the first iteration, so it’s easy for the end user to verify that the data is correct. In future iterations, we can deliver high-impact, eye-catching dashboards. Equally important to closing the story is being able to prove that it’s complete. In Agile methodologies, this is usually referred to as the validation step or showcase. Since we have already produced the content, then it’s easy to prove to the user that the story is complete. Suppose, for a moment, that we believed we couldn’t deliver new content in a single iteration. This would imply that we would have an iteration during our project that didn’t include actual end-user content. How would a developer go about validating or showcasing that content to the end-user who opened the story? How would we go about showcasing a completed ETL mapping, for instance, if we haven’t delivered any content to consume it?

Figure 8: Logical versus Physical Model

Figure 9: SQL Prototype Generated by OBIEE


When using ETL processing to “physicalize” our logical model, we can’t abandon our Agile imperatives: we must still deliver the new content, and corresponding rework, within a single iteration. So whether the end user is opening the user story because the data quality is abysmal, or because the performance is just not good enough, we must vow to deliver the ETL Iteration time-boxed, in exactly the same manner that we delivered the Model-Driven Iteration. So, if we imagine that our user opens a story about data quality in our Customer and Product dimensions, and we decide that we only have time for those two dimension tables in the current iteration, does it make sense for us to deliver those items in a vacuum? With Figure 11 depicting the process flow for an entire subject area, can we deliver it piecemeal instead of all at once?

The answer, of course, is that we can. We’ll develop the model and ETL exactly as we would if our goal was to plug the dimensions into a complete, traditional subject area. We use surrogate keys as the primary key for each dimension table, facilitating joining our dimension tables to completed fact tables. We won’t have completed fact tables, at least in this iteration. Instead we have a series of transaction tables that work together to form the basis of a logical fact table. How can we use a dimension table with a surrogate key to join to our transactional “fact” table that doesn’t yet have these surrogate keys? Figure 12 demonstrates the issue taking as an example our completed Customer dimension.

The answer is: we fake it. Along with surrogate keys, the long-standing best practice of dimension table delivery has been to include the source system natural key, as well as effective dates, in all our dimension tables. These attributes are usually included to facilitate slowly changing dimension (SCD) processing, but we’ll exploit them for our Agile piecemeal approach as well.

We start by creating aliases to our transactional “fact” tables (called POS_TRANS_HYBRID and POS_TRANS_HEADER_HYBRID in Figure 12), because we don’t want to upset the LTS that we are already using for the pure transactional version of the logical fact table. We create a complex join between the customer source system natural key and transaction date in our hybrid alias, and the natural key and effective dates in the dimension table. We use the effective dates as well to make sure we grab the correct version of the customer entity in question in situations where we have enabled Type 2 SCDs (the usual standard) in our dimension table, as demonstrated in Figure 13. This complex logic of using the natural key and effective dates is identical to the logic we would use in what Ralph Kimball calls the “surrogate pipeline”: the ETL processing used to replace natural keys with surrogate keys when loading a proper fact table.5 Using Customer and Sales attributes in an analysis, we can see the actual SQL that’s generated, as demonstrated in the first diagram in Figure 14.
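The hybrid join described above — natural key plus effective-date banding in place of a surrogate key — can be expressed in ordinary SQL. A hedged sketch under assumed column names (CUSTOMER_NK, EFFECTIVE_FROM, EFFECTIVE_TO and the fact columns are illustrative; POS_TRANS_HYBRID is the alias named in Figure 12):

```sql
-- Join the transactional "fact" alias to a Type 2 Customer dimension using
-- the source natural key and the dimension's effective-date range, picking
-- the customer version that was current on the transaction date.
SELECT t.trans_id,
       t.trans_date,
       t.sale_amount,
       d.customer_name
  FROM pos_trans_hybrid t
  JOIN dim_customer d
    ON d.customer_nk = t.customer_nk
   AND t.trans_date >= d.effective_from
   AND t.trans_date <  d.effective_to;
```

This is the same predicate logic a surrogate pipeline would apply at load time; here it simply runs at query time, hidden behind the semantic layer.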

We can view this hybrid approach as an intermediate step, but there is also nothing wrong with this as a long-term approach if the users are happy with the content and the performance of their dashboards. A surrogate key is an easy way of representing the natural key of the table, which is the source system natural key plus the unique effective dates for the entity. A surrogate key makes this relationship much easier to envision, and certainly to code using SQL, but when we are insulated from the ugliness of the join by the OBIEE semantic layer and Exadata’s capability of processing that join efficiently, we really shouldn’t care about the exact physical implementation of the model. If our end users ever open a story asking for rework of the fact table, we may consider manifesting that table physically as well. Once complete, we would need to create another LTS for the Customer dimension (using an alias to keep it separate from the table that joins to the transactional tables). This alias would be configured to join directly to the new Sales fact table across the surrogate key, exactly how we would expect a traditional data warehouse to be modeled in the BMM. The physical model will

5 Ralph Kimball et al., The Kimball Group Reader, (Wiley Publishing, Inc., 2010), Kindle Edition, Location 12,463

Figure 10: Complete Logical Model

Figure 11: Piecemeal Subject-Area Delivery

Figure 12: Hybrid Join

Figure 13: Hybrid Join Criteria


look nearly identical to our logical model, but the generated SQL would be less interesting, as demonstrated in the second diagram in Figure 14.

Combined Iteration, or “The user is in complete control!”

Now that I’ve described the Model-Driven and ETL Iterations, it’s time to discuss what I call the Combined Iteration, which is likely what most of the iterations will look like when the project has achieved some maturity. In Combined Iterations, we work on adding new or refactored RPD content alongside new or refactored ETL content in the same iteration. Now the project really makes sense to the end user. We allow the user community — those who are consuming the content — to dictate to the developers with user stories what they want the developers to work on in the next iteration. The users will constantly open new stories, some asking for new content, and others requesting modifications to existing content. All Agile methodologies put the burden of prioritizing user stories squarely on the shoulders of the user community. Why should IT dictate priorities to the user community? If we have delivered fabulous content sourced with the Model-Driven paradigm, and Exadata provides the performance necessary to make this “real” content, then there is no reason for the implementors to dictate to the users the need to manifest that model physically with ETL when they haven’t asked for it. We may find that entire subject areas — or portions of subject areas — are consistently ignored as candidates for ETL rework. This may alarm the seasoned data warehouse architect in all of us, but it shouldn’t. We should view this sign as exactly what it is: a clear indication that our process is working.

About the Author

Stewart Bryson, a recipient of the Oracle ACE Award, is Managing Director for Rittman Mead America, and since 1996 has been designing, building and supporting complex database systems and data warehouses. Stewart has an expert-level understanding of the Oracle Database and BI stacks, and is experienced leading a project from initial scope to final delivery. Based in Atlanta, GA, Stewart has delivered projects of varying sizes across multiple verticals, including logistics, supply chain, hospitality, municipal government, retail and clickstream.

Stewart is a recognized writer and speaker in the Oracle community, and has presented at major conferences in the US, the UK, and Australia, including Oracle Open World, ODTUG Kaleidoscope, IOUG Collaborate, RMOUG Training Days, and UKOUG Conference and Exhibition.

Figure 14: Comparing Different SQL Output

• Quarterly Education Workshops
• Special Annual Training Days Rates
• Database Labs
• List Server Subscriptions
• SQL>Update Magazine

www.rmoug.org/member.htm

Become A Member For These Outstanding Benefits

Stewart during his bridge climb in Sydney, Australia


2012 Training Days Was Fantastic! Reserve Your Spot Early for 2013
Exhibitors • Sponsors • Presenters

If you have any questions, feel free to contact [email protected]. We look forward to another successful Training Days in 2013.

February 12 & 13, 2013, with University Days on February 11
Information & Updates Available On

http://www.rmoug.org



Let me start out by clearing up any misconceptions anyone has after hearing my accent – I’m Australian! I came to the US over 15 years ago but still retain an Australian accent which is often mistaken for English, South African or New Zealand. Given that most Americans hear many more British accents than the others, I am regularly mistaken for being English.

My current job is Product Manager for Oracle Application Express (Oracle APEX). As such I get to travel the world presenting at conferences and user group meetings, talking with partners and customers, and working very closely with the Application Express development team. My multi-faceted job includes promoting Application Express, acting as the liaison between the Oracle APEX community and the development team, and working with other Oracle teams such as documentation, curriculum development, etc.

I absolutely love to present and always try to include as many live demonstrations as possible. I very rarely use pre-recorded demos as it is more exciting to do it live, even if there’s more risk of something going wrong. I have had my share of disasters where my laptop crashed or I couldn’t get an environment working, but generally my live demos go very well. That is one of the beauties of Application Express. It is fast and easy to demonstrate.

Rarely if ever do I get nervous doing presentations. My first ever public presentation was in South Korea and I got zero, nada, zilch reaction from the audience throughout the whole session - not even a single question. A somewhat inauspicious start, but I blamed it on the fact the translator must have been useless and put it behind me (the fact I was nervous and spoke way too fast had no bearing on the outcome). Very recently I was in Hyderabad, India for


RMOUG Member Focus
by David Peake

Oracle Java One and Develop and had the pleasure of doing one of the two major keynotes. Standing in front of 1,500 people on a huge raised stage (like the keynote stages at OpenWorld) with bright lights for the cameras, and at least 8 enormous monitors behind me was a little intimidating, but I loved the experience.

I have been in the computer industry for a long time, primarily either in Product Management or building custom applications. I first started doing anything with computers back in high school. We had punch cards on which we could write one statement per card. If the cards got out of order or you didn’t use the correct pencil to mark the card exactly in the right box, your program failed. Don’t even ask me what language it was as I have no idea. From there I did some FORTRAN and Pascal as part of my Mechanical Engineering and used to tinker with my personal computer.

My first real job was for a bank in Australia doing phone support for anything computer related. I then jumped to another bank as a programmer where they taught me COBOL and Oracle Forms 2.3 running Oracle Database 5 on an IBM mainframe. I thankfully left COBOL, and even worse Pro*COBOL, behind to concentrate on Oracle Forms and Oracle CASE (the precursor to Oracle Designer). I had a contractor working for me that suggested I should join Oracle and had a contact in the local Melbourne office I could talk to.

Based on that original introduction I joined Oracle in 1993 and have been with them ever since. I spent the first 13 years in Oracle Consulting building custom applications using mainly Oracle Forms and Oracle Designer. After a few years in Australia doing everything from cold calls with sales to conducting training courses and of course consulting for customers, I transferred to a project in Auckland, New Zealand. Unfortunately the project wrapped up after only 18 months and Oracle management asked us to find other opportunities. I was lucky enough to get a lead role on a consulting project in Detroit, MI so I moved to the USA. It used Oracle Designer to generate out Tuxedo which talked to Oracle Forms 6i. I was on that project for over 5 years which was both good and bad – very rare within consulting to be on one job for so long and not have to travel. However, by the time the project finished most of Oracle Consulting was moving towards Java development and I had no experience at all in Java.

A true turning point in my career happened when the project in Detroit was winding down. I heard of a consulting project in Chicago, IL that needed PL/SQL developers. With the prospect of either trying to lead a team of Java developers (a truly daunting prospect) or getting back to more of a development role using PL/SQL





and some new product I’d never heard of called Project Marvel, the choice was pretty easy. The project turned out to be with the Chicago Police Department which was the Beta site for Project Marvel. When I joined the project most of the Project Marvel development team were just rolling off to work on productizing the tool which was released as HTML DB in 2004. HTML DB was then renamed to Oracle Application Express in 2006.

I spent 4 years at the Chicago Police Department as a part of a team developing all of their front-end systems using what is now called Oracle Application Express. Working at the police department presented some unique challenges, such as going into meetings where your customer has a side arm, which took a little getting used to. Patrol Officers are not very patient when it comes to learning a new computerized system that replaced paper processes, and one officer literally threw a PC out of a window – how many times would you have liked to do that?

I took to Oracle Application Express like a duck to water as I was an experienced PL/SQL developer and was very familiar with declarative development from all my years developing Oracle Forms. I also trained up over 20 Oracle Forms and/or PL/SQL developers in the tool; all of them were fully productive in weeks and also loved the tool. So almost 6 years ago I decided I needed a change from Oracle Consulting and contacted Mike Hichwa, who was the VP responsible for Application Express and also one of the original authors of the tool. He said he had been looking for some time for a new Product Manager, to which I replied “Of course I would like the job … What is a Product Manager?” That initial conversation led to me becoming a Product Manager.

So what is it like being a Product Manager at Oracle? Much like most of our answers these days: ‘It depends’. Different product managers throughout Oracle have varied responsibilities; some are only internal or external facing, some work with less popular tools, and some travel almost 100% of the time. Personally I love my job: I work from home or am travelling (about once a month), get to do a range of different things and promote a tool that is exceedingly popular in the Oracle community. Given that Application Express is a “no-cost” option of the Oracle Database we don’t get a lot of marketing dollars, but we do have an exceedingly enthusiastic APEX community.

So why Denver? I moved to the Denver area in mid-2011 for personal reasons and haven’t looked back. Given that I work from home, provided I can get high-quality internet access and have access to a major airport I can live almost anywhere in the United States. Just coming from the Midwest, I truly appreciate how wonderful the weather is in Colorado and don’t miss the months of overcast, chilling cold with a wind that blows right through you. I also don’t miss the Chicago area traffic which is always nasty and congested. I enjoy playing volleyball in the warmer months and I’m looking forward to spending many weekends in the Rocky Mountains camping. Combine that with just over an hour to some of the best ski resorts in the world and I’m sold.

Just like the Colorado weather, I’ve found RMOUG members to be very warm and inviting. I look forward to seeing you all at one of the wonderful events they put on.

Cheers,
David

David and Darl Kuhn enjoying themselves at 2012 Training Days.



demand for electricity exceeds the generating capacity. (Which may be seen as a good thing, since it implies economic growth.)

Most power comes from hydroelectric dams, but the water filling the lakes behind the dams is dependent on rain. When it does not rain, or between the rainy seasons, there is less water to run the generators so there is less electricity. (Rumor has it that the electric company sells the electricity to Sudan rather than sending it to Addis Ababa... I suppose it is possible, but if the government (in the person of the Prime Minister (same guy)) refuses to let anyone know how many Ethiopian troops were killed or wounded in Somalia, what makes us think they will admit to diverting electricity?)

Adding electric generating capacity is a challenge. For example, there is a new dam in southern Ethiopia that has not come on line yet because there has not been enough water to fill the lake so the generators can be turned on. One plan to make up the shortfall involved installing diesel generators at the substations. (Of course, the diesel fuel to run the generators has to be trucked in from somewhere and continues to get more expensive.)

So, two to three days a week the power is off for some or most of the day in parts of the city during the winter and spring. The outages move around the city so, yesterday, the power was out at my house and, today, it is off at the office. Fortunately, we have generators so the computers and the refrigerator keep working, but making electricity with a diesel generator is much less efficient and much harder on the environment than a hydroelectric plant.

Data centers actually need and use the battery backup and generator. A data center in the US may have backup power but the equipment usually only gets turned on to test that it still works. Here you have to plan that your generator will actually get used quite a bit and must be maintained.

Needless to say, I have a new appreciation for candles and how much we depend on electricity every day. (In the “this is strange” department, I found myself sending e-mails by candle light one night because the cellular modem I was using could still connect to the cellular phone network that was running on battery power.)

The pictures show the MoFED office building and street scenes in Addis Ababa.

Ciao, John

RMOUG Board Focusby John Jennuette

IT Adventures in Africa

One of the the more interesting proj-ects I have worked on during my long career as an application developer, project man-ager, and general all-around Information Technology person was for the Ministry of Finance and Economic Development (MoFED) for the government of Ethiopia.

Some background on the project: As a part of a many-year project by the Harvard University Business School, hundreds of accountants have been trained in modern bookkeeping methods to keep track of gov-ernment income (internal sources (taxes) and external sources (grants and loans)) and expenditures. The training parts of the project have been very successful and have produced tangible results. The project I worked on was to fix and improve the com-puter systems to support the bookkeeping.

Despite the usual image of “Africa” as plains and mountains with tribal villages, I lived and worked in what may be qualified as “urban Africa” in the Ethiopian capital city, Addis Ababa. No grass huts and small villages. (A typical oddity, however, is that I lived near an intersection called “sarbet”, which means “grass house”). I lived in an apartment with all the modern conve-niences I could ask for (when the electricity was on). The availability of electric power becomes an important part of IT projects, even in the big city.

In the US we expect to flip a switch and the light or other device connected to the circuit will turn on and operate until we decide to turn it off. We get very upset if the power goes out during a storm or because a car runs into a power pole. Even local out-ages can often be corrected easily because there is a continent-wide electric grid that can move power from places where it is gen-erated to places where it is needed.

In Ethiopia, the electric company is a government-owned and -run monopoly. (In fact, the Prime Minister has announced that strong government control of the econ-omy is an advantage for Ethiopia compared to a free-market environment). But the


Reach A Targeted Oracle Audience • Advertise Now!

A full page, full color ad in RMOUG SQL>UPDATE costs as little as 70 cents per printed magazine and even less for smaller ads.

SQL>UPDATE is mailed to all RMOUG Members and distributed during Quarterly Education Workshops and Training Days each year.

See Media Kit for deadlines and mechanical requirements.

Submit ad materials to: Pat Van Buskirk, RMOUG Newsletter Director
38101 Comanche Creek Road • Kiowa, CO 80117
303-621-7772 • E-mail: [email protected] • www.rmoug.org

RMOUG SQL>Update Advertising Rates
Business card or 1/8 page .... $   50
1/4 page ..................... $  350
1/2 page ..................... $  625
Full page .................... $1,000
Inside cover ................. $1,250
Back cover ................... $1,500

Discounts available for RMOUG Members and Full Year Contracts

[Back-issue cover: SQL>UPDATE Vol 60 • Fall 2010 — Sue Harper, SQL Developer Data Modeler; Jed Walker, Configuring Data Guard for Physical Standby Part II; Cary Millsap, Method R (An Interview); Dan Hotka, Oracle Trace Facility; Board Focus, Tim Gorman; Member Focus, Debra Addeo]


Meet Your Board
RMOUG Board of Directors

Tim Gorman
President & Vendors Director
E-mail: [email protected]

Kellyn Pot'vin
Training Days Director
E-mail: [email protected]

David Fitzgerald
Social Media Director
E-mail: [email protected]

Kathy Robb
Board Member Emeritus
Arisant, LLC
E-mail: [email protected]

Ron Bich
Secretary & SIGS Director
E-mail: [email protected]

Carolyn Fryc
Vice President & Programs Director
E-mail: [email protected]

Thomas Green
Scholarship Director
E-mail: [email protected]

Art Marshall
Web Director
E-mail: [email protected]

Heidi Kuhn
Administrative Assistant
Voice Mail: (303) 948-1786
Fax: (303) 933-6603
E-mail: [email protected]

Pat Van Buskirk
Treasurer & Newsletter Director
E-mail: [email protected]


6/5/12 • Board • June Meeting, Location TBD
6/24-28/12 • ODTUG • Oracle Developer Tools Users Group Kaleidoscope 12 in San Antonio, TX • http://www.odtug.com
7/15/12 • Newsletter • Call for Articles & Cover Photo, Fall Issue SQL>Update
7/17/12 • Board • RMOUG Board of Directors Meeting, Location TBD
8/13-14/12 • Exhibition • Enkitec Extreme Exadata Exhibition, Four Seasons Hotel & Resort, Irving TX
8/15/12 • Seminar • RMOUG 1-day Speak-Tech seminar with Jonathan Lewis in Broomfield CO
8/15/12 • Newsletter • Deadline for Articles & Cover Photo, Fall Issue SQL>Update
8/17/12 • QEW • RMOUG Summer Quarterly Educational Workshop (QEW) at Elitch Gardens, Denver CO; Board Meeting at QEW
9/6/12 • Symposium • UTOUG Fall Symposium 2012, Location TBD
9/15/12 • Newsletter • Mailing, Fall Issue of SQL>Update
9/18/12 • Board • RMOUG Board of Directors Meeting, Location TBD
9/30-10/04/12 • Open World • Oracle Open World 2012, Moscone Convention Center, San Francisco CA
10/16/12 • Board • RMOUG Board of Directors Meeting, Location TBD
11/16/12 • QEW • RMOUG Autumn Quarterly Educational Workshop (QEW), Location TBD
12/03-05/12 • Conference • UKOUG Conference 2012, Birmingham UK
2/11-13/13 • Training Days • RMOUG Training Days 2013, Colorado Convention Center, Denver CO

Please note dates are subject to change. For the most current events calendar visit our website at www.rmoug.org.

RMOUG Events Calendar

              Breakfast    Discounted Ad Rate    Total Cost
1/4 Page      $350.00      $175.00               $  525.00
1/2 Page      $350.00      $312.50               $  662.50
Full Page     $350.00      $500.00               $  850.00
Inside Cover  $350.00      $625.00               $  975.00
Back Cover    $350.00      $750.00               $1,100.00

Contact Carolyn Fryc - Programs Director - 720-221-4432 - [email protected]

SPONSOR A QUARTERLY EDUCATION WORKSHOP AND RECEIVE A HALF-PRICE AD IN SQL>UPDATE

Help RMOUG Members and Receive Recognition in An Upcoming Issue of SQL>Update

Rare Close-In Luxury Horse Property
Franktown, CO • $944,000

4.5 Acres • Heated Barn • Dressage Arena

Incredible Price For These Amenities!

• 6,339 Square Feet
• Pikes to Longs Peak Views
• Marble & Granite Throughout
• Burnished Cherry Cabinetry
• Full Wing Master Suite
• Two Double-Sided Fireplaces

Pat Van Buskirk • Coldwell Banker • Parker Office
(303) 243-0737 Cell • (303) 841-5263 Office
www.patvanbuskirk.com • [email protected]

SQL>UPDATE • Summer 2012 31

Join us for our next Quarterly Education Workshop in August for festivities at Elitch Gardens in Denver. RMOUG hosts quarterly workshops in May, August and November of each year with the fourth and largest educational event being Training Days in February. Learn about the newest technologies, gain more insight into Oracle techniques and enjoy the camaraderie of meeting with other Oracle professionals.

If you or your organization is interested in partnering with RMOUG to host an upcoming meeting, or would like to submit an abstract for presentation, please contact

Carolyn Fryc, Programs Director at [email protected]

Watch RMOUG’s Web Page for August Training Topics www.rmoug.org

Quarterly Education Workshops
August 17, 2012 • Elitch Gardens

Advertise Now! Contact [email protected]

Index To Advertisers
datAvail ................................................ 32
DBAK .................................................... 23
MR Trace ................................................ 30
Quarterly Education Workshop ............................ 31
Quarterly Education Workshop Sponsorship ................ 30
Regis University ........................................  2
Real Estate Coldwell Banker ............................. 30
RMOUG Membership ........................................ 22
Training Days 2013 ...................................... 23
