I was forced to use a global temporary table to hold data. Look at the package below:

    for i in (select cl_number, hts_no, part_id, product_desc, source_country
                ...
               where (hts_chapter_no, effective_date) =
                     (select hts_chapter_no, max(effective_date)
                        ...))
    loop
      ...
    end loop;
    open my_ref_cur for select * from class_test;

In the classification_product_master table I have duplicate values for hts_no, say:

    select hts_no from classification_product_master
    ...

Using these values, I pass them to the query:

    select hts_chapter_no, max(effective_date)
    ...

Is there any other way to avoid using this temp table?

That logic is not any different from this query:

    select i.hts_no, j.hts_chapter_no, i.cl_number, 'Y', ...
      from classification_product_master i, lcs_hts_mult_desc j
     where ...
       and (j.hts_chapter_no, j.effective_date) =
           (select hts_chapter_no, max(effective_date)
              ...)

Your existing logic does not "de-dup" any rows from classification_product_master; it takes every row in that table and joins it to the "most recent" row in the lcs_hts_mult_desc table (so if there were 15 rows in classification_product_master with hts_no "12345", you would get 15 (or more) rows back). It might be more efficient to simply query:

    select ...
      from (select ...,
                   max(effective_date) over (partition by j.hts_chapter_no) max_effective_date
              from ...)
     where effective_date = max_effective_date

That joins I to J by hts_no (which is what you do) and keeps only the record with effective_date = max(effective_date) for rows with that hts_no and country_cd = '02' and language_cd = 'EN'. You were never forced to use a gtt, never.

I'm probably asking for trouble, since there are so many ways to approach this problem, but here is a case where a two-step approach involving an intermediate table works more quickly than a single query. Start with a query that performs well, add a predicate (the one below that includes smk_u), and watch the perfectly good execution plan change slightly and performance go out the window. The original plan used hash joins; the new plan uses hash joins and then nested loops, and performance is unacceptable. To me it is quicker to divide and conquer, so I am considering a temp table as a workaround, as demonstrated below.

Original query -- if you remove the DECODE/NVL predicate it runs quickly; with that predicate it is just too slow:

    select tran_pd, cal_date, sku0, store,
           sum(sal_u) sal_u, sum(smk_u) smk_u, sum(sal_r) sal_r,
           temp_price, perm_price
      from (select tran_pd, cal_date, sku0, store, sal_u, smk_u, sal_r,
                   pricing_bp.get_temp_price(store, cal_date, sku0) temp_price,
                   pricing_bp.get_perm_price(store, cal_date, sku0) perm_price
              from skudaily
             where tran_pd between :p_start_pd and :p_end_pd
               and store = :store
               and nvl(sal_u,0) - decode(:p_sales, 1, 0, nvl(smk_u,0)) <> 0)
     group by tran_pd, cal_date, sku0, store, temp_price, perm_price

Here skudaily is a well-indexed, mature union view with millions of rows.

The fairly easy, fairly fast workaround:

    alter session set optimizer_goal = CHOOSE;

    insert into ...
    select tran_pd, cal_date, sku0, store, sal_u, smk_u, sal_r, ...
      from skudaily
     where tran_pd between :p_start_pd and :p_end_pd
       ...

    select tran_pd, cal_date, sku0, store, count(*) num_rows,
           sum(sal_u) sal_u, sum(smk_u) smk_u, sum(sal_r) sal_r, ...
      from ...
     where nvl(sal_u,0) - decode(:p_sales, 1, 0, nvl(smk_u,0)) <> 0
     ...

By moving the problem predicate to the second query, I can execute the first query using the efficient execution plan. It works, it's faster, and the annoying predicate is applied to the small data set in the temp table. I didn't have to try to out-think the optimizer: I kept the nice plan it gave me, and instead of struggling with tuning the query I used a temp table.

I think the discussion here about using GTTs as an "interface" table matches our usage. We have a master table containing a few thousand rows, a detail table containing tens of millions of rows with a foreign key to the master table, and a GTT with only one column, designed to receive inserts of master table primary key values. In addition to this we have a stored procedure that performs a query (for reporting purposes) on the detail table, using a query like (but much more complex than) this: ...

At the moment I'm not sure that our schema and the query above are as good as they can be:

1. gtt.master_id does not have a foreign key relationship defined to master. Can (should) an FK to master be defined for the column on the GTT?
2. gtt.master_id is not declared unique, whereas master.master_id is unique. Should it be?
3. My third concern is the variability in the number of rows that can be inserted into the GTT: anything from 1 to 5000. We currently have inserted stats on the GTT to tell the CBO that it contains 0 rows, so the CBO will choose a plan based on this information, and thereafter all queries will use this plan. My concern is that a plan that is optimal for 1 row turns out to be not so flash for 1000; in practice we can expect either 10 or 1000 rows in the GTT. I have looked at the DYNAMIC_SAMPLING and DYNAMIC_SAMPLING_EST_CDN hints, which look promising. What is the best approach to handle this variability?
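The analytic rewrite above generalizes to any "keep only the most recent row per key" requirement. A minimal sketch, using a hypothetical table t(grp_id, eff_date, payload) rather than the original schema:

```sql
-- Sketch only: t, grp_id, eff_date and payload are hypothetical names.
-- One pass over the table; no temp table and no correlated subquery.
select grp_id, eff_date, payload
  from (select grp_id, eff_date, payload,
               max(eff_date) over (partition by grp_id) max_eff_date
          from t)
 where eff_date = max_eff_date;
```

Compared with the correlated `(col1, col2) = (select ..., max(...))` form, this reads the table once instead of twice. Ties on eff_date within a grp_id still return multiple rows, just as the correlated form would.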
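On the GTT cardinality concern raised in the discussion: the DYNAMIC_SAMPLING hint asks the optimizer to sample the table at hard-parse time instead of trusting the stored zero-row stats. A hedged sketch, assuming a GTT named gtt and a detail table named detail (names taken from the discussion; the appropriate sampling level depends on your Oracle version):

```sql
-- Sketch: sample gtt at parse time so the CBO sees its true row count.
select /*+ dynamic_sampling(gtt 2) */ d.*
  from gtt, detail d
 where d.master_id = gtt.master_id;
```

Note the caveat from the question still applies: once a cursor is built and shared, later executions reuse its plan even if the GTT row count changes.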