...
The last two points need some elaboration:
EIS (Executive Information Systems)
Everybody has seen the demonstration: the executive clicks on an icon and, two seconds later, a pan-dimensional chart of product sales appears, grouped by category, year to date, for the last 25 years. The chart is then used at the next board meeting and the executive receives much acclaim.
While every MIS department would like to provide this type of facility to its executives, without an understanding of, and planning for, the basic EIS requirements it will remain exactly what it is: a fairy story.
Unless you have planned, architected, and designed EIS information into your database, you will not be able to produce results like this within a reasonable time frame.
In other words, the EIS scenario just described may well be possible, but if you keep 300,000 rows of product line item information, it may take 90 minutes to extract and summarize the data required for the pan-dimensional graph. Even worse, if the executive does not know the significance of the "Record Deleted (Y/N)" column in the database table, all he or she may receive at the next board meeting is severe embarrassment.
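To make that second pitfall concrete, here is a minimal sketch in Python, with an in-memory SQLite database standing in for whatever DBMS you actually use; the table and column names are invented for illustration. A summary that ignores a soft-delete flag like "Record Deleted (Y/N)" silently includes logically deleted rows:

```python
import sqlite3

# Hypothetical schema: a soft-delete flag that silently skews summaries.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE product_lines (
        category TEXT,
        amount   REAL,
        deleted  TEXT CHECK (deleted IN ('Y', 'N'))
    )
""")
con.executemany(
    "INSERT INTO product_lines VALUES (?, ?, ?)",
    [("Widgets", 100.0, "N"),
     ("Widgets", 250.0, "Y"),   # logically deleted, physically present
     ("Gadgets", 300.0, "N")],
)

# Naive summary: counts the logically deleted row and overstates sales.
naive = con.execute(
    "SELECT category, SUM(amount) FROM product_lines "
    "GROUP BY category ORDER BY category"
).fetchall()

# Correct summary: honours the soft-delete flag.
correct = con.execute(
    "SELECT category, SUM(amount) FROM product_lines "
    "WHERE deleted = 'N' GROUP BY category ORDER BY category"
).fetchall()

print(naive)    # [('Gadgets', 300.0), ('Widgets', 350.0)]
print(correct)  # [('Gadgets', 300.0), ('Widgets', 100.0)]
```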
Some things you may care to think about:
- EIS systems need to be planned and carefully designed.
- EIS means summaries.
- Summaries mean reading lots of rows of data.
- Reading lots of rows takes time and consumes a lot of CPU power.
- Most useful and rapid EIS systems keep "pre-summarized" data.
- The end users actually query the summary data.
- The summary data may be updated dynamically or cyclically (the sketch after this list shows dynamic maintenance via a trigger).
- Designing and maintaining summary data means extra design and programming effort independent of any generic tool being used.
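As a concrete illustration of "pre-summarized" data updated dynamically, here is a minimal sketch, again using Python and SQLite as a stand-in for your actual DBMS and trigger mechanism; the schema and names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales_detail (
        category TEXT,
        year     INTEGER,
        amount   REAL
    );
    CREATE TABLE sales_summary (
        category TEXT,
        year     INTEGER,
        total    REAL,
        PRIMARY KEY (category, year)
    );
    -- Dynamic maintenance: every detail insert updates the summary,
    -- so end users query a handful of summary rows, not the detail.
    CREATE TRIGGER maintain_summary AFTER INSERT ON sales_detail
    BEGIN
        INSERT OR IGNORE INTO sales_summary (category, year, total)
        VALUES (NEW.category, NEW.year, 0);
        UPDATE sales_summary
           SET total = total + NEW.amount
         WHERE category = NEW.category AND year = NEW.year;
    END;
""")

for row in [("Widgets", 2024, 100.0), ("Widgets", 2024, 50.0),
            ("Gadgets", 2024, 300.0)]:
    con.execute("INSERT INTO sales_detail VALUES (?, ?, ?)", row)

# The EIS query reads the small summary table, not the detail rows.
print(con.execute(
    "SELECT * FROM sales_summary ORDER BY category").fetchall())
# [('Gadgets', 2024, 300.0), ('Widgets', 2024, 150.0)]
```

The cyclic alternative would be a batch job that rebuilds sales_summary from sales_detail overnight; either way, the extra design and maintenance effort mentioned above is real.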
Data throughput
This is really an extension of the previous point.
Don't go into client/server with unrealistic expectations.
You may design an application that "sucks up" 6,000 table rows into a spreadsheet on your PC and then summarizes and sorts them (this approach is contrasted with a server-side summary in the sketch after the list below).
However, you should then ask yourself some questions like these:
...
- Many of the old DP rules about benchmarking and load testing, etc. seem to have been lost in the scramble for client/server applications.
- The old rules still apply: if you ignore them until your application is complete, you are taking a large risk.
- Building client/server applications and using client/server tools may place a great deal more load on your LAN and twinax communication subsystems. Can they handle this increased load?
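The contrast is easy to demonstrate. Below is a sketch in the same hedged style, with an in-memory SQLite database standing in for the remote server (so no real network is involved); the point is the number of rows that would have to cross the wire in each approach:

```python
import sqlite3
from collections import defaultdict

# The in-memory database stands in for a remote server; names invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (category TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("cat%d" % (i % 10), float(i)) for i in range(6000)])

# Client-side: all 6,000 rows cross the wire, then the PC does the work.
rows = con.execute("SELECT category, amount FROM orders").fetchall()
totals = defaultdict(float)
for category, amount in rows:
    totals[category] += amount
print(len(rows), "rows transferred")      # 6000

# Server-side: the DBMS summarizes; only 10 rows cross the wire.
summary = con.execute(
    "SELECT category, SUM(amount) FROM orders GROUP BY category"
).fetchall()
print(len(summary), "rows transferred")   # 10
```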
To summarize:
- Spend a lot of time in design
- Use virtual fields to do dynamic "joins"
- Use triggers
- Put validation rules into the Repository and I/O Modules
- Distribute load/logic across the Client and Server
- Avoid pushing too much data through communication subsystems
- Do realistic tests/benchmarks before implementation (see the benchmark sketch at the end of this section)
- Note that the amount of data transferred includes not just the data passed from the database on the Server to the Client program, but also the data passed from the Client program to the UIM (User Interface Manager).
This is very important and often overlooked.
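Finally, as a nod to the old benchmarking rules, here is a crude timing harness of the kind you might run before implementation. It is only a sketch: the schema, row volumes, and in-memory database are all stand-ins, and a realistic test would use your actual DBMS, network, and data volumes:

```python
import sqlite3
import time

# Crude benchmark sketch: measure the query at realistic volumes
# before committing to a design. All names and volumes are illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE detail (category TEXT, amount REAL)")
con.executemany("INSERT INTO detail VALUES (?, ?)",
                (("cat%d" % (i % 25), float(i)) for i in range(300_000)))

start = time.perf_counter()
result = con.execute(
    "SELECT category, SUM(amount) FROM detail GROUP BY category"
).fetchall()
elapsed = time.perf_counter() - start
print(f"{len(result)} summary rows from 300,000 detail rows "
      f"in {elapsed:.3f}s")
```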