Pending Credits Enhancement / Performance Increase



Questions and Answers : Wish list : Pending Credits Enhancement / Performance Increase

machx0r
Joined: 3 Apr 99
Posts: 21
Credit: 151,017
RAC: 0
United States
Message 3163 - Posted: 1 Jul 2004, 14:26:15 UTC

Most of this is based on assumption, but it appears that the current pending.php lookup causes heavy server load. I assume this is because of the large amount of detail provided and the multiple database calls/joins that need to be performed to gather it all.

Personally, all I'm interested in is the number of pending credits, not the details of each unit. Perhaps the Pending entry on home.php could be changed from a "View" link to a simple credit total, like "Total credit" and "Recent average credit".

That lookup seems like it would be less resource intensive than the detailed pending page currently available. And if the detail page is still needed, the number displayed on home.php under Pending Credit could be a link to it, similar to the "Results" link on the show_host_detail.php page.
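The cheap aggregate being suggested could look something like the sketch below, using sqlite3 as a stand-in. The table and column names (`result`, `userid`, `claimed_credit`, `validate_state`) are illustrative assumptions, not the actual BOINC schema:

```python
import sqlite3

# Illustrative schema only; table and column names are assumptions,
# not the real BOINC tables.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE result (
                  id INTEGER PRIMARY KEY,
                  userid INTEGER,
                  claimed_credit REAL,
                  validate_state INTEGER)""")  # 0 = still pending
db.executemany("INSERT INTO result VALUES (?, ?, ?, ?)",
               [(1, 7, 10.0, 0), (2, 7, 12.5, 0), (3, 7, 30.0, 1)])

# One cheap aggregate instead of fetching every pending row in detail:
pending = db.execute(
    "SELECT COALESCE(SUM(claimed_credit), 0) FROM result "
    "WHERE userid = ? AND validate_state = 0", (7,)).fetchone()[0]
print(pending)  # 22.5
```

A single SUM returns one number to the web page, rather than shipping every pending row to PHP for display.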

machx0r
Joined: 3 Apr 99
Posts: 21
Credit: 151,017
RAC: 0
United States
Message 3278 - Posted: 1 Jul 2004, 22:57:36 UTC

No one thinks this is a good idea? :D

Heffed
Volunteer tester
Joined: 19 Mar 02
Posts: 1856
Credit: 40,736
RAC: 0
United States
Message 3307 - Posted: 2 Jul 2004, 0:58:40 UTC - in response to Message 3278.

> No one thinks this is a good idea? :D

Sounds good to me.


John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24677
Credit: 522,659
RAC: 40
United States
Message 3325 - Posted: 2 Jul 2004, 2:26:18 UTC - in response to Message 3307.

> > No one thinks this is a good idea? :D
>
> Sounds good to me.
>
>
I wouldn't mind having the link display the number, or having the number next to the link, so that we could look at the number and decide whether we wanted to follow the link.

Paul D. Buck
Volunteer tester
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 3462 - Posted: 2 Jul 2004, 14:14:57 UTC
Last modified: 2 Jul 2004, 14:26:51 UTC

The trouble with SQL is that your query gives you little control over how the database gets the answer. In the case of the pending credit numbers, the SQL (as of a couple of days ago) searches against the "results" table.

So what is the difficulty? None, except that the query for a simple pending credit number is working against the busiest and most central table in the system, and most likely the largest one as well.

Add to that the issue of indexes. There are 7 or 8 of them on the table, which in theory should speed up the query, except that many indexes slow down all inserts, updates, and deletes (if any).

So an insert into the table generates 7-9 I/O operations, because every index needs to be updated each time the data in the table changes.

Layered on top of that, the select for the pending credit uses SELECT *; meaning, give me all of the data for the records of interest.

So, we have the largest, and probably the slowest, table being searched for one or two columns of data for the pending credit calculation, and the rest of the data in each row is never used. This is especially troubling because the results table at this time has 3 BLOBs as part of the record.

That's the background ...

So, we ask the database for our pending credit...

The database will hopefully use an index to find the records we need; then it pulls in the entire record for each row of interest, including an extra 3 I/O operations to fetch the BLOBs (which are usually not stored as part of the base record; instead, a pointer in the base record points to the location of the BLOB). All of this data is then sent to the PHP script, which selects one column of data (or was it 2?) and discards the rest.

Databases are rarely compute bound; they are almost always I/O bound, which is why you watch anything that impacts I/O like a hawk. You never, ever want to do anything that increases I/O on the database.

In Oracle, the optimizer will ignore an index if the two-step process of reading the index and then fetching the data would increase total I/O. In those cases it does a full table scan instead. Silly as it sounds, it actually makes sense.

In a data warehouse, having lots of indexes makes sense because the table data is "static": it does not change; mostly we add data at one end and drop it at the other. The results table in our case is in the class of a regular database, or an Operational Data Store. But as soon as you think you need more than 2 or 3 indexes to get read performance, you should be asking whether there is a problem with the design, coding, or data life-cycle.

As I understand it, as soon as a work unit is issued, a record is created in the results table indicating that the work unit is "live" and expected back. At this point the BLOBs are not needed, since they hold information about error conditions. For these records, one design choice would be to store them in a "slimmed-down" table with only the basic data. This might mean the web site would show those WUs on a different screen, but this data is now in a separate table and has its own screen for display.

The second stage is when we get an answer back; at this point we move the record from result-1 to result-2, where we store the data of the response, and here the BLOBs do come into play. But this table is also not as large as the final table. When the results get validated, they can be moved to table result-3, which is now like an ODS class-3 where the updates are slow, and having more indexes makes more sense. Over time, these results get moved to result-4, which "retires" the WU from view on the web site.

Now, that is a data life-cycle that matches the needs/desires of the users, separates the data, and spreads the load across several tables. It also decreases locking problems (which I did not address, but if the lock escalates from row, to segment, to table level, that is also an un-good thing; without more data on the locking scheme I would not begin to guess at the size of the problem).
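A toy version of that lifecycle, using sqlite3 with hypothetical `result_pending` and `result_valid` tables (the names and columns are invented for illustration, not the actual result-1/result-2 schema being proposed):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- hypothetical lifecycle tables; names are illustrative only
    CREATE TABLE result_pending (id INTEGER PRIMARY KEY,
                                 userid INTEGER, claimed_credit REAL);
    CREATE TABLE result_valid   (id INTEGER PRIMARY KEY,
                                 userid INTEGER, granted_credit REAL);
""")
db.execute("INSERT INTO result_pending VALUES (1, 7, 10.0)")
db.execute("INSERT INTO result_pending VALUES (2, 7, 12.5)")

def validate(db, result_id, granted):
    """On validation, move the row to the next table in the lifecycle."""
    rid, uid = db.execute(
        "SELECT id, userid FROM result_pending WHERE id = ?",
        (result_id,)).fetchone()
    db.execute("INSERT INTO result_valid VALUES (?, ?, ?)",
               (rid, uid, granted))
    db.execute("DELETE FROM result_pending WHERE id = ?", (result_id,))

validate(db, 1, 10.0)
n_pending = db.execute("SELECT COUNT(*) FROM result_pending").fetchone()[0]
n_valid = db.execute("SELECT COUNT(*) FROM result_valid").fetchone()[0]
print(n_pending, n_valid)  # 1 1
```

Each table stays small and carries only the indexes its own access pattern needs, which is the load-spreading effect described above.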

In summary:

1) You never use SELECT *, because changes to the underlying table structure can break code if column position is a factor and the table is restructured. The other reason is that you rarely need all of the data from a table; in fact, it is almost never needed. Avoiding it reduces I/O from disk to memory, and in this case it also reduces the I/O between the database server and the web server, which in most of the cases I looked at used as few as two columns of data.
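A minimal contrast of the two forms (sqlite3 sketch; the `stderr_out` BLOB column is an invented stand-in for the large fields described above):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE result (id INTEGER PRIMARY KEY, userid INTEGER, "
           "claimed_credit REAL, stderr_out BLOB)")
# zeroblob(4096) stands in for a large error-output BLOB
db.execute("INSERT INTO result VALUES (1, 7, 10.0, zeroblob(4096))")

# SELECT * drags the whole row -- BLOB included -- off the disk and
# across to the client:
wide = db.execute("SELECT * FROM result WHERE userid = 7").fetchone()
# Naming only the column you need avoids all of that:
narrow = db.execute(
    "SELECT claimed_credit FROM result WHERE userid = 7").fetchone()
print(len(wide), len(narrow))  # 4 1
```

The narrow query returns one column instead of four, and the unused BLOB never leaves the database.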

2) You never try to cover up performance problems by throwing indexes at them. Most of the time the optimizer will not use an index if the I/O overhead exceeds the cost of a full table scan (I cannot say for sure whether this is happening in this case; you need to watch the actual query execution to know, but I would bet on it, and if the index does not map directly to the conditions of the WHERE clause it won't be used). The problem with an index is that it helps reads (SELECTs) at the cost of making everything else slower; in this case 8 times slower...

3) For the data lifecycle, you think about designing the tables to match the actual needs. In the example data-lifecycle, I move the data through a series of tables so that table lock contention is reduced and the indexing and internal structures are minimized to improve performance. In the rare case where data is needed from all (or several) of the tables, you can use UNION with dummy data in the "missing" columns. There is no specific case in the web interface that demands the data could not be segregated: one screen for the WUs issued but not completed, another for the WUs pending validation and the formation of a quorum, and another for the ones that are complete with credit issued. Also, results older than, say, a month can be handed off to the back end and still be accessible to the participant (I mean, how often do you want to know about a WU you did last summer?).

4) As soon as you see queries that go against the "largest" table and calculate statistics, you look at storing those as attributes of another table. I forget the case I saw, but it could have been an attribute of the participant (user) record: a simple counter-type column where the number is stored, and as results are returned it increments and decrements the counter instead of doing calculations on your "busy" table.
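A sketch of that counter-column idea, with an invented `pending_credit` column on a hypothetical user record (sqlite3; none of these names come from the real schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, pending_credit REAL)")
db.execute("INSERT INTO user VALUES (7, 0.0)")

def result_returned(db, userid, claimed):
    # A result comes back: add its claim to the running total.
    db.execute("UPDATE user SET pending_credit = pending_credit + ? "
               "WHERE id = ?", (claimed, userid))

def credit_granted(db, userid, claimed):
    # Credit is granted: the claim is no longer pending.
    db.execute("UPDATE user SET pending_credit = pending_credit - ? "
               "WHERE id = ?", (claimed, userid))

result_returned(db, 7, 10.0)
result_returned(db, 7, 12.5)
credit_granted(db, 7, 10.0)
pending = db.execute(
    "SELECT pending_credit FROM user WHERE id = 7").fetchone()[0]
print(pending)  # 12.5
```

Displaying pending credit then becomes a single-row lookup on the small user table instead of a scan or aggregate over the busy results table.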

No one is going to listen to me this time either, but the SQL I looked at earlier this week, to put it "gently", sucks swamp water ...

Just my opinion.. And we did see these problems on occasion in Beta too ...

{edit}

Sorry about the spelling, but I am not doing well this morning ...

I did not see any joins in the queries I looked at; that does not mean they are not there... but the WHERE clauses are usually a bigger problem. As one of my examples shows (in the SQL section of the Glossary), changing the WHERE to a simpler one and then using UNION/MINUS can give several orders of magnitude improvement in speed.
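The UNION rewrite alluded to here usually takes this shape: a WHERE with an OR across two different columns can defeat index use, while two simple queries combined with UNION each hit their own index. A sqlite3 sketch with an invented schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE result (id INTEGER PRIMARY KEY, "
           "userid INTEGER, hostid INTEGER)")
db.execute("CREATE INDEX idx_user ON result(userid)")
db.execute("CREATE INDEX idx_host ON result(hostid)")
db.executemany("INSERT INTO result VALUES (?, ?, ?)",
               [(1, 7, 100), (2, 8, 100), (3, 7, 200), (4, 9, 300)])

# The OR form may force the optimizer into a full scan on some engines:
ored = db.execute(
    "SELECT id FROM result WHERE userid = 7 OR hostid = 100").fetchall()
# The UNION form lets each branch use its own index, and UNION
# removes the duplicates where both conditions match:
unioned = db.execute(
    "SELECT id FROM result WHERE userid = 7 "
    "UNION SELECT id FROM result WHERE hostid = 100").fetchall()
print(sorted(ored) == sorted(unioned))  # True
```

Whether the rewrite actually wins depends on the engine and its optimizer; the point is that the two forms return the same rows, so it is always available as an option.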

Almost all of the queries I looked at were basically using the database as a flat file. But I only looked at them for about 4 hours, and spent part of that time reading the MySQL documentation to see if there were any surprises there. The biggest thing I think they are missing is binary indexes, which would help in this case. As best I can tell, they use the standard B-Tree index (like everyone else).

Spelling is getting worse :) Time to quit for the day... if any developer wants to talk to me about this, you know my e-mail address.


Copyright © 2014 University of California