Feed aggregator

Globally block a specific SQL Plan Directive?

Tom Kyte - 12 hours 39 min ago
I work with a team of DBAs to administer an Oracle-based COTS application suite for over forty client departments within our organization. This past summer we upgraded all client environments from Oracle 12c Release 1 to Oracle 19c, on entirely new and upgraded virtual hardware platforms. Our largest client department has > 20,000 active users.

The COTS software package and its data model are primarily designed for transaction throughput and supporting client user browsing actions. It generates large volumes of dynamic SQL statements in response to user actions. For historical reasons, and to support older versions of Oracle, many of the dynamically generated queries include the /* BIND_AWARE */ directive. This has not caused problems in the past, but since the upgrade to Oracle 19c, these directives frequently cause the SQL optimizer to choose vastly inferior, inefficient execution plans. For example, the Optimizer may choose a FULL INDEX SCAN on a secondary index of a table containing 300 million rows. Executing the same query after removing the /* BIND_AWARE */ directive causes the Optimizer to access the same table using a UNIQUE scan of the Primary Key. This results in an execution time in milliseconds, as opposed to several minutes (worst case) for the original dynamic query that includes the directive.

Since these queries are generated many times per second during heavy usage periods, they have caused some serious performance problems, to the point of rendering the systems unusable. The COTS vendor has looked at the problem, along with recommendations we received from Oracle Support, but acknowledges that it will be a large effort to identify and remove those directives for clients running recent releases of Oracle. We have verified to our satisfaction that the directives are the cause of the problem because of the differences in the execution plans when they are present or not. This surprised our DBAs because they understood that the Oracle 19c Optimizer ignored directives, but that is apparently not the case.

They have been able to work around the problem for some frequently occurring queries by applying SQL patches to force the Optimizer to ignore all directives, but a patch only applies to a single SQL statement / SQL ID. The problem is the large number of dynamically generated queries containing the /* BIND_AWARE */ directive, which results in an indefinite number of distinct SQL statements / SQL IDs. It's like Heracles fighting the Hydra - cut off one head and two more grow back.

Is there any way in Oracle 19c to globally disable a specific SQL Plan Directive like /* BIND_AWARE */? We do not want to disable all directives, because there are others related to specific admin tasks that should remain enabled, but we want to be able to suppress any and all occurrences of /* BIND_AWARE */, at least for specific schemas. So far I have not been able to identify any such feature in 19c. Thanks, Patrick
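For reference, the per-statement SQL patch workaround described above can be sketched with DBMS_SQLDIAG; the sql_id and patch name below are illustrative, and this is exactly the per-SQL-ID approach that does not scale to an open-ended set of dynamic statements:

<code>
-- minimal sketch of the single-statement workaround; sql_id and name are
-- placeholders, not values from the question
DECLARE
  l_patch_name VARCHAR2(128);
BEGIN
  l_patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
                    sql_id    => 'abcd1234efgh5',
                    hint_text => 'IGNORE_OPTIM_EMBEDDED_HINTS',
                    name      => 'ignore_bind_aware_patch');
END;
/
</code>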
Categories: DBA Blogs

Invoice Annotation with Sparrow/Python

Andrejus Baranovski - Sun, 2022-12-04 07:52
I explain our Streamlit component for invoice/receipt document annotation and labeling. It can be used either to create new annotations or review and edit existing ones. With this component you can add new annotations directly on top of the document image. Existing annotations can be resized/moved and values/labels assigned. 

This component is part of Sparrow - our open-source solution for data extraction from invoices/receipts with ML.

 

GoldenGate PURGEOLDEXTRACTS

Michael Dinh - Sat, 2022-12-03 16:24

There was an issue where GoldenGate trail files had consumed 95% of a 197G filesystem.

PURGEOLDEXTRACTS was changed from MINKEEPHOURS 48 to MINKEEPHOURS 20

The above configuration is relative, and with increased transaction volume the filesystem can still fill up.

Why not size the trail file accordingly and keep n trail files?

https://docs.oracle.com/en/middleware/goldengate/core/19.1/reference/purgeoldextracts-manager.html

PURGEOLDEXTRACTS for Manager
MINKEEPFILES n
Keeps at least n trail files, including the active file. The minimum is 1 and the maximum is 100. The default is 1.
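For example, a Manager parameter sketch along those lines (the trail path and file count are illustrative):

PURGEOLDEXTRACTS /u09/dirdat/aa*, USECHECKPOINTS, MINKEEPFILES 10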

This way the filesystem should not fill up, because Manager will do the cleanup.

This is how an extract trail file can be sized when adding.
ADD EXTTRAIL /u09/dirdat/aa EXTRACT e_hawk, MEGABYTES 500

https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/changing-size-trail-files.html

ALTER EXTTRAIL trail, EXTRACT group, MEGABYTES n
ALTER RMTTRAIL trail, EXTRACT group, MEGABYTES n

When implementing the above, it would be best if trail files reside on a separate mount from GoldenGate Home.

Would you rather have 20 redo logs at 100MB or 10 at 200MB?

There’s no right or wrong. It’s whether the shoe fits or not.

UKOUG Breakthrough 2022 : Day 2

Tim Hall - Fri, 2022-12-02 14:19

Day 2 started pretty much the same as day 1. I arrived late to avoid the traffic. The first session I went to was Martin Nash with “Oracle Databases in a Multicloud World”. I was a bit late to this session, but from what I saw it seemed the general view was “don’t be stupid, … Continue reading "UKOUG Breakthrough 2022 : Day 2"

The post UKOUG Breakthrough 2022 : Day 2 first appeared on The ORACLE-BASE Blog.

UKOUG Breakthrough 2022 : Day 1

Tim Hall - Fri, 2022-12-02 03:26

The evening before the conference the Oracle ACEs met up for some food at a curry place in the city centre. Thanks to the Oracle ACE Program for organising this! Earlier that day I presented my first face to face session in 3 years, and now it was time for my first social event in … Continue reading "UKOUG Breakthrough 2022 : Day 1"

The post UKOUG Breakthrough 2022 : Day 1 first appeared on The ORACLE-BASE Blog.

How to change Launch Configuration to Launch Template in AWS

Pakistan's First Oracle Blog - Thu, 2022-12-01 21:59

Here is a step-by-step guide on how to change a launch configuration to a launch template in AWS for an auto scaling group. It's actually quite simple and straightforward.

AWS sent out a notification this week to accounts that make use of Amazon EC2 Auto Scaling launch configurations: Amazon EC2 launch configurations will deprecate support for new instance types. After December 31, 2022, no new Amazon Elastic Compute Cloud (Amazon EC2) instance types will be added to launch configurations. After this date, existing launch configurations will continue to work, but new EC2 instance types will only be supported through launch templates.


In order to update the ASG, you need to follow the steps below:


1. Create a launch template and paste the user data scripts in it and save it. Also, make sure that you are using the correct AMI ID in it.


2. Once the launch template is created, navigate to your auto scaling group and, in the details section of the ASG, click the "Edit" button in the launch configuration section. There you will get an option at the top, "Switch to Launch Template".


3. Then select your newly created launch template and save the changes.

Here is the document to create launch template.

Here is the document to know how to replace a launch configuration with a launch template.

The existing instances will keep running; only new instances will be launched using the launch template. On the ASG console, in the instance management section, you can check whether an instance was launched using the launch template.

To verify from the instance's perspective (for example, whether the application is running and the instance is working properly), you can log in to the instance and check the details. Switching to a launch template will not automatically launch a new instance in the ASG; you would have to change the desired capacity to launch a new instance using the launch template.

Categories: DBA Blogs

tracking blocking sessions

Tom Kyte - Thu, 2022-12-01 04:46
Hello sir, I want to see the blocking sessions that occur during the day, to fix the sessions or SQLs that are causing the others to lock up. There is no trace of the blockers' SQL in GV$ACTIVE_SESSION_HISTORY. The sql_id and top_level_sql_id columns identify the blocked SQL, but not the blocker's SQL. How can I get it? Thanks for your help.
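A sketch of one way to dig the blockers out of ASH (column names per GV$ACTIVE_SESSION_HISTORY; note the blocker only appears in ASH for samples where it was itself active, so an idle blocker's current SQL will not be sampled, which is often why it seems missing):

<code>
-- join ASH back to itself: the waiter row carries blocking_session,
-- and the blocker's matching sample carries the blocker's sql_id
-- (single-instance sketch; for RAC, align samples by sample_time
-- instead of sample_id)
SELECT w.sample_time,
       w.session_id AS waiting_sid,
       w.sql_id     AS waiting_sql_id,
       b.session_id AS blocking_sid,
       b.sql_id     AS blocking_sql_id
FROM   gv$active_session_history w
JOIN   gv$active_session_history b
  ON   b.session_id      = w.blocking_session
 AND   b.session_serial# = w.blocking_session_serial#
 AND   b.inst_id         = w.blocking_inst_id
 AND   b.sample_id       = w.sample_id
WHERE  w.blocking_session IS NOT NULL;
</code>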
Categories: DBA Blogs

Default privileges for new objects

Tom Kyte - Thu, 2022-12-01 04:46
Hi, is there a method to set default system privileges for all newly created objects in Oracle, such as tables, sequences, procedures, functions and triggers? For example, a SELECT privilege assigned to an Oracle user. How can this be done without having to write additional code beyond e.g. CREATE TABLE, SEQUENCE, FUNCTION etc.? Best regards, Dackard.
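There is no built-in "default object privileges" feature; a classic workaround is a DDL trigger that queues the grant. A minimal sketch (app_owner and reporting_user are illustrative; DBMS_JOB is used rather than DBMS_SCHEDULER because the grant is DDL with an implicit commit, which cannot run directly inside the trigger):

<code>
CREATE OR REPLACE TRIGGER trg_auto_grant
AFTER CREATE ON app_owner.SCHEMA
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  IF ora_dict_obj_type IN ('TABLE', 'VIEW') THEN
    -- queue the grant; it runs right after the CREATE commits
    DBMS_JOB.SUBMIT(
      job  => l_job,
      what => 'EXECUTE IMMEDIATE ''GRANT SELECT ON "' ||
              ora_dict_obj_owner || '"."' || ora_dict_obj_name ||
              '" TO reporting_user'';');
  END IF;
END;
/
</code>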
Categories: DBA Blogs

Need to know if there is any provision for copying data in parallel from a main table to a table in a different schema

Tom Kyte - Thu, 2022-12-01 04:46
Hello Team, I would like to know whether there is any provision for copying data in parallel from a main table (in one schema) to a table in a different schema, without hampering current traffic. We need an approach that is as fast as possible; please suggest or guide us on how we can do this. Please note: we want to create a new schema which contains the same tables as the main schema. As soon as a SQL operation happens on a main schema table, the change should be copied to the new schema as well, in a fast way. We need this new schema for reporting purposes only, which is why we need to develop this approach. Please guide us on this as soon as possible. Thanks & Regards, Shital
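One built-in approach matching this description is a fast-refresh materialized view in the reporting schema; a minimal sketch, assuming a main table main_schema.orders and a reporting schema rpt (note that ON COMMIT refresh adds work to the writing transaction, so scheduled refresh, GoldenGate, or an Active Data Guard standby are the usual alternatives under heavy traffic):

<code>
-- change log on the source table enables fast (incremental) refresh
CREATE MATERIALIZED VIEW LOG ON main_schema.orders
  WITH PRIMARY KEY;

-- reporting copy, kept in sync at each commit on the source
CREATE MATERIALIZED VIEW rpt.orders_mv
  REFRESH FAST ON COMMIT
  AS SELECT * FROM main_schema.orders;
</code>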
Categories: DBA Blogs

PL/SQL question: how to write a query that accepts start and end booking dates and displays all hotel reservations between those dates.

Tom Kyte - Thu, 2022-12-01 04:46
Create table hotel_reservation with the following fields: booking id, booking start date, booking end date, room type, room rent. Write a PL/SQL block to accept start and end booking dates and display all hotel reservations between those dates.

*** In this question I am stuck here: ***

<code>
create table hotel_reservation(
  booking_id         number(20),
  booking_start_date date,
  booking_end_date   date,
  room_type          varchar2(50),
  room_rent          number(20));

insert into hotel_reservation values(1, TO_DATE('10-05-2022', 'dd-mm-yyyy'), TO_DATE('12-05-2022', 'dd-mm-yyyy'), 'Double', 12000);
insert into hotel_reservation values(2, TO_DATE('21-07-2022', 'dd-mm-yyyy'), TO_DATE('25-07-2022', 'dd-mm-yyyy'), 'Single', 5000);
insert into hotel_reservation values(3, TO_DATE('01-02-2022', 'dd-mm-yyyy'), TO_DATE('02-02-2022', 'dd-mm-yyyy'), 'Luxury', 30000);
insert into hotel_reservation values(4, TO_DATE('30-06-2022', 'dd-mm-yyyy'), TO_DATE('01-07-2022', 'dd-mm-yyyy'), 'Double', 10000);
insert into hotel_reservation values(5, TO_DATE('15-10-2022', 'dd-mm-yyyy'), TO_DATE('15-10-2022', 'dd-mm-yyyy'), 'Quad', 11000);

select * from hotel_reservation;

DECLARE
  book_id    hotel_reservation.booking_id%type;
  book_sdate hotel_reservation.booking_start_date%type := TO_DATE('10-05-2022', 'dd-mm-yyyy');
  book_edate hotel_reservation.booking_end_date%type   := TO_DATE('15-10-2022', 'dd-mm-yyyy');
  r_type     hotel_reservation.room_type%type;
  r_rent     hotel_reservation.room_rent%type;
BEGIN
  SELECT booking_id, booking_start_date, booking_end_date, room_type, room_rent
  INTO   book_id, book_sdate, book_edate, r_type, r_rent
  FROM   hotel_reservation
  WHERE  booking_start_date = book_sdate
  AND    booking_end_date   = book_edate;

  dbms_output.put_line('hotel_reservation ' || book_id || ' ' || book_sdate || ' ' ||
                       book_edate || ' ' || r_type || ' ' || r_rent);
END;
</code>

*** in the BEGIN block of the PL/SQL ***
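A possible fix, as a minimal sketch: SELECT ... INTO can only fetch exactly one row (it raises TOO_MANY_ROWS for several reservations), so loop over a cursor instead, and compare the dates with an overlap predicate rather than equality:

<code>
DECLARE
  p_start DATE := TO_DATE('10-05-2022', 'dd-mm-yyyy');
  p_end   DATE := TO_DATE('15-10-2022', 'dd-mm-yyyy');
BEGIN
  -- a reservation falls in the window if its stay overlaps [p_start, p_end]
  FOR r IN (SELECT booking_id, booking_start_date, booking_end_date,
                   room_type, room_rent
            FROM   hotel_reservation
            WHERE  booking_start_date <= p_end
            AND    booking_end_date   >= p_start)
  LOOP
    DBMS_OUTPUT.PUT_LINE(r.booking_id || ' ' || r.booking_start_date || ' ' ||
                         r.booking_end_date || ' ' || r.room_type || ' ' ||
                         r.room_rent);
  END LOOP;
END;
/
</code>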
Categories: DBA Blogs

Split String with table function

Tom Kyte - Thu, 2022-12-01 04:46
Hi, I use this table function for string splitting (a space is used as the separator):

<code>
create or replace FUNCTION fmv_space_to_table(p_list IN VARCHAR2)
  RETURN fmv_test_type
AS
  l_string      VARCHAR2(32767) := p_list || ' ';
  l_comma_index PLS_INTEGER;
  l_index       PLS_INTEGER := 1;
  l_tab         fmv_test_type := fmv_test_type();
BEGIN
  LOOP
    l_comma_index := INSTR(l_string, ' ', l_index);
    EXIT WHEN l_comma_index = 0;
    l_tab.EXTEND;
    l_tab(l_tab.COUNT) := TRIM(SUBSTR(l_string, l_index, l_comma_index - l_index));
    l_index := l_comma_index + 1;
  END LOOP;
  RETURN l_tab;
END fmv_space_to_table;
</code>

The table function works fine:

<code>
select * from table( fmv_space_to_table( 'A2345 123456 7890 2344'))
</code>

Output: A2345, 123456, 7890, 2344

When the table function fmv_space_to_table is used in a query with only number values, it works fine, e.g.

<code>
... s.productnumber IN ( select * from table( fmv_space_to_table( '123456 7890')) )
</code>

When the table function is used in a query with letter + number values, it does not work, e.g.

<code>
... s.productnumber IN ( select * from table( fmv_space_to_table( 'A2345')) )
</code>

Error: ORA-00904: "A2345": ungültige ID (invalid identifier)

Cause of error: my table function returns A2345 as output, but in this case 'A2345' is needed.

Question: how can the table function fmv_space_to_table be extended so that it can also be used with letter + number values?
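For what it's worth, the function itself already handles letters fine, as the first test shows; ORA-00904 with the value quoted as an identifier usually means the list was concatenated into dynamic SQL somewhere, so that IN (A2345) gets parsed as a column name. Used directly as a row source, no quoting is needed for strings or numbers. A sketch (some_table is a placeholder; s.productnumber follows the question's example):

<code>
SELECT s.*
FROM   some_table s
WHERE  s.productnumber IN (
         SELECT column_value
         FROM   TABLE(fmv_space_to_table('A2345 123456'))
       );
</code>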
Categories: DBA Blogs

ORA-00001 on merge, concurent transactions

Tom Kyte - Thu, 2022-12-01 04:46
When having two simultaneous merges into a table with a unique constraint in different sessions, the second one throws ORA-00001. Steps to reproduce:

<code>
create table tab1 (col1 number);
create unique index ind1 on tab1(col1);

--1st session:
merge into tab1 d
using ( select 1 col1 from dual ) s
on (s.col1 = d.col1)
when not matched then insert (col1) values(s.col1);

--2nd session:
merge into tab1 d
using ( select 1 col1 from dual ) s
on (s.col1 = d.col1)
when not matched then insert (col1) values(s.col1);
--second session now hangs

--1st session:
commit;

--2nd session:
--throws ORA-00001
</code>

As far as I know, this might be how Oracle has behaved since the MERGE statement exists, and it might be considered to work as expected. My objection is that MERGE behaves differently depending on whether the first session has committed before the second session starts its merge. The second session obviously knows it cannot just insert: it hangs and waits for the first session to finish. And the second session is only blocked if it is working on the same key of the unique index, so it is clearly aware that there is an uncommitted session working on the same record. But when the first session commits the record to the table (resulting in an insert), the second session has already decided that it too should insert, and it violates the constraint. Why? Shouldn't the second session re-evaluate what the correct action is? If we try this with [merge when matched], the second session hangs until the first one commits, but even though the constrained column is updated, there is no issue for the second session to update the same record. Does this make sense, or do you think it "works as intended" and shouldn't be addressed?
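A common mitigation, as a minimal sketch using the poster's own tables: catch ORA-00001 (DUP_VAL_ON_INDEX) and rerun the merge, which on the second attempt sees the committed row and resolves to "matched" (a no-op here, since only WHEN NOT MATCHED is defined):

<code>
BEGIN
  MERGE INTO tab1 d
  USING (SELECT 1 col1 FROM dual) s
  ON (s.col1 = d.col1)
  WHEN NOT MATCHED THEN INSERT (col1) VALUES (s.col1);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- the row now exists and is committed; the retry takes the matched path
    MERGE INTO tab1 d
    USING (SELECT 1 col1 FROM dual) s
    ON (s.col1 = d.col1)
    WHEN NOT MATCHED THEN INSERT (col1) VALUES (s.col1);
END;
/
</code>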
Categories: DBA Blogs

Birmingham City University (BCU) : The changing role of the DBA

Tim Hall - Thu, 2022-12-01 03:35

Yesterday I took a trip across town to Birmingham City University (BCU) to do a talk to the students. The talk was called “The changing role of the DBA”. It’s been over 3 years since I’ve done a face-to-face presentation. I did some online presentations at the start of lockdown, but it’s been 2 years … Continue reading "Birmingham City University (BCU) : The changing role of the DBA"

The post Birmingham City University (BCU) : The changing role of the DBA first appeared on The ORACLE-BASE Blog.

AWS re:Invent day 3 – Gray failures mitigation

Yann Neuhaus - Wed, 2022-11-30 19:36

So again, during the 3rd day of re:Invent 2022 there was plenty of interesting information to take with us. However, the most interesting part was for sure the workshop about gray failures and how to mitigate them in a multi-AZ environment. Let me try to wrap this up for you…

What are gray failures?

No idea what “gray failures” stands for? Let us try to remember a situation we have surely all experienced: users say “the application isn't working! Can't work…” while at the same time, on the infrastructure side, you hear “Nope, everything green, no alerts there… must be the network ;-)”

That's a gray failure: a non-binary situation where everything is working and not working at the same time. A kind of distortion between the application/user perception and the system one.

It is typically a conjunction of “small” issues that may not even have breached any monitoring alert thresholds, but which taken together make the application unusable. Some research even states that the biggest cloud outages come from these gray failures, as part of the so-called failure circle.

Using a multi-AZ architecture on AWS, you can indeed try to detect and mitigate these gray failures.

Let's take as an example a web-based application running across three availability zones, with a load balancer and front-end web servers (as part of an auto scaling group) and a database as the backend.

How to detect gray failures?

In the case of gray failures, our traditional monitoring approach (i.e. CPU > 96%, latency > 60ms, …) isn't enough. Again, the CPU may only be at 95%… but the latency may also have reached 59ms… and at the same time we face an unusual amount of retransmitted packets due to a “tired” switch.

Do you remember “Murphy's Law”? Everything that can go wrong… goes wrong…

On AWS, CloudWatch can help you there.

Using AWS CloudWatch you will at first create your traditional alerts (CPU, latency, …), but also region availability and latency ones. Then, as a second step, CloudWatch allows you to create so-called “composite alarms”, which are exactly based on this principle of combining several alarms/metrics using simple AND, OR, NOT expressions.

You can find much more information about detecting such gray failures in the AWS Advanced Multi-AZ Resilience Patterns whitepaper.

How to react?

Now that we have an idea of how to detect these gray failures, the obvious question is “how do we react when one happens?”.

The principle is pretty “easy”: evict the AZ which is facing issues from our multi-AZ architecture.

This is the same principle as a cluster evicting a failed node.

To do so we can act on 2 levels:

1. Stop sending requests to the load balancer of the affected AZ by deactivating this “route” in our Route53 DNS configuration

2. Stop the instances in the faulty AZ and make sure the auto scaling group will bootstrap new ones in the remaining AZs

Disabling the DNS in Route53

Here is the global idea: when the alert is raised, you set a simple value in a DynamoDB table stating that the corresponding AZ is unhealthy.

In Route53 you will have linked each DNS entry to a health check.

What does this health check, which has been configured beforehand in Route53, do?
It calls the Amazon API Gateway to make a REST request against the DynamoDB table and check the health state of the corresponding AZ.

If the REST API comes back stating that the AZ is not healthy anymore (by sending a 4xx code back), then Route53 won't return this path anymore until it becomes healthy again.

You can also have a look at the AWS article Creating Disaster Recovery Mechanisms Using Amazon Route 53.

Updating the auto scaling group

So now that our users aren't sent to the faulty AZ anymore, the next step is to update the auto scaling group for my web front-end instances. This should automatically shut down the ones in the faulty AZ and start new ones in the remaining AZs.

To do so, the idea is again to use a table in DynamoDB in which we will store the subnets of the faulty AZ.

Finally, we will simply run a Lambda function to update the auto scaling group to remove those subnets. As a consequence, the instances running in them will be shut down and new ones will be started in the remaining AZs.

The article AWS re:Invent day 3 – Gray failures mitigation first appeared on the dbi Blog.

Installing Azure AZ Module on Windows

Yann Neuhaus - Wed, 2022-11-30 13:52

Since the Azure (or AzureRM) module was marked as deprecated some years ago, the AZ module is the official replacement to manage Azure resources with PowerShell.

This module requires at least PowerShell 7.0.6 LTS (Long Term Support) or PowerShell 7.1.3, but higher versions are recommended.

While PowerShell 1.0 through 5.1 ship as components of the Windows operating system, PowerShell 7 is cross-platform and built on top of .NET Core.
As a result, PowerShell Core has to be deployed on the environment to use this module.

If you try to use the AZ module on Windows PowerShell, you will encounter an error.

Indeed, I do not have the recommended versions specified in the documentation.

For my example, I needed to install PowerShell Core. To do so, I ran the MSI installer for a PowerShell LTS release.
This kind of release only contains security fixes and servicing fixes, to minimize the impact on production environments. They are of course included in the Microsoft Lifecycle Policy.

NOTE: PowerShell 7.0 is no longer supported; at least PowerShell 7.1 is now required.

After this installation, I get a suitable version.

This new module allows you to download only the packages you need, which avoids having too many resources loaded in your PowerShell console.

Keep in mind that executing Install-Module -Name AZ will install ALL packages.

In most cases, you will prefer only installing packages relative to Azure resources you want to manage.
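For example, a minimal sketch (Az.Accounts and Az.Compute stand in for whichever sub-modules you actually need):

# install only the sub-modules you need instead of the whole AZ bundle
Install-Module -Name Az.Accounts, Az.Compute -Scope CurrentUser
# and load only those into the session
Import-Module -Name Az.Accounts, Az.Compute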

In addition, using Import-Module -Name AZ will load all packages, as can be seen in the PowerShell module script associated with the AZ module.

If you still have scripts running with the AzureRM module, you should update them before 29 February 2024.

The article Installing Azure AZ Module on Windows first appeared on the dbi Blog.

AWS re:Invent 2022 – Day 2

Yann Neuhaus - Wed, 2022-11-30 10:30

The second day usually starts with the keynote from Adam Selipsky, CEO of AWS. The keynote is not a technical session, but it's a good place to learn about new product or service launches. At such a big event, it's a show as well. When you enter the room, there is a band playing music, and I liked the setlist full of rock music!

Announcements

One part of the keynote was dedicated to data and how to discover, ingest and process it. In that area, he announced the Amazon Aurora integration with Amazon Redshift and the Amazon Redshift integration with Apache Spark. The idea is to run analytics queries on near-realtime transactional data. For visualisation of your data, he introduced a new type of question supported by Q in Amazon QuickSight.

The idea of this blog is not to list all the new launches. You can get a full list of new announcements on a dedicated AWS blog: Top announcements of AWS re:Invent 2022.

Audit and compliance

Today I didn't attend any workshop, but I was in a builders session with David. These sessions are really appreciated, and you need to be fast to save your spot. It looks like a workshop, but there are fewer seats and there is one dedicated speaker per table. We configured CloudTrail Lake in a demo account. Then we ran a few queries to explore the audit data. I invite you to look at David's blog AWS re:Invent day 2: All about Lakes for more details about the feature itself.

Learning with games

I also wanted to try a session type that was new to me: gamified learning. I decided to go easy for my first time. Indeed, I chose to participate in a session where you take the quests at your own pace, without competition.

Once seated, I registered on a platform, and the pitch is quite simple: I'm a new employee of the startup Unicorn and I have some quests to solve. You win points by solving those quests. You can work as a team per table and compete with the other tables if you'd like. The image on the left shows a few of the available quests. I completed 2 quests of different levels: one very easy (1 out of 5) about Web Resiliency and one more difficult (4 out of 5) about scaling containers in ECS.

The easier quest was well guided and close to the content you can get in a workshop or builders session. There are different tasks with well-explained objectives, but the answers are hidden, and revealing the hints costs you some of the points you have earned.

As for the “Scaling Fast and Furious” quest, it looks like a scenario. I got only a few lines of text giving the context, about the application not meeting customer expectations, and some vague hints provided by colleagues, but no clear tasks and objectives. I'm happy I finally completed this quest, even though I did not have a lot of practical experience with ECS task auto-scaling.

The article AWS re:Invent 2022 – Day 2 first appeared on the dbi Blog.

In SQL developer compare of two tables

Tom Kyte - Wed, 2022-11-30 10:26
Hi, how can I compare two tables in SQL Developer in the same database, performing the following actions: add, delete and modify, so that I can add to, delete from or modify the target table based on the source table? Can I get an example with a procedure for this task? For example, table A has old data and table B has new data, so we have to compare the two tables; if any new data comes in, it has to be applied, performing add, delete or modify. For this I need a procedure with an example. Thanks.
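A common pattern for the procedure part, as a minimal sketch (assuming both tables share a key column id and one payload column val): MERGE handles add and modify, and a separate DELETE removes rows that disappeared from the new table:

<code>
-- add new rows and modify changed ones
MERGE INTO a t
USING b s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

-- delete rows that no longer exist in the new table
DELETE FROM a t
WHERE NOT EXISTS (SELECT 1 FROM b s WHERE s.id = t.id);
</code>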
Categories: DBA Blogs

Average number of rows over a time period

Tom Kyte - Wed, 2022-11-30 10:26
I find myself needing to generate a data set where I get the average number of rows over a time period. This period needs to be daily, by hour, for the same period on previous days. As an example: get the average number of records from 1 am to 2 am for the last 180 days. In effect, count the number of rows on 17 Nov 2022 between 1 am and 2 am, then the number of rows on 16 Nov between 1 am and 2 am, then on 15 Nov between 1 am and 2 am, etc. Putting the date and time in the WHERE clause is easy enough but is manual. I would like to dynamically create these values for the WHERE clause based on a start date and time passed to the query. Therefore, if the start date and time passed in is 11-17-2022 01:00, an hour is added to get the end hour, and then 1 day is subtracted repeatedly for the past 180 days to get the average row count for the date range. Is there some function(s) within Oracle that I could use to dynamically create the dates and times based on the start date and time?

Update 23 Nov 2022: based on some research I've been doing, here is some code I have come up with so far. I thought it best to put this in a procedure.

<code>
CREATE OR REPLACE PROCEDURE TestAvgATMTransactions (
  DaysToLookBack             INT,
  CurrentStartDateTimeInput  TIMESTAMP,
  CurrentEndDateTimeInput    TIMESTAMP,
  PreviousStartDateTimeInput TIMESTAMP,
  PreviousEndDateTimeInput   TIMESTAMP,
  RTXNTYPCDToCount           VARCHAR2
)
IS
  Low_ATM_Tran_Count_PWTH EXCEPTION;
  PRAGMA EXCEPTION_INIT(Low_ATM_Tran_Count_PWTH, -20001);

  TYPE two_cols_rt IS RECORD (
    PreviousStartDateTime TIMESTAMP,
    PreviousEndDateTime   TIMESTAMP
  );
  TYPE DateTimesToQuery IS TABLE OF two_cols_rt;
  StartEndDateTime DateTimesToQuery;

  PWTHCount       RTXN.RTXNNBR%TYPE; -- the average transactions during the given period
  AvgTransactions INT;               -- the current average transactions
BEGIN
  -- This does generate the days given how far I want to look back,
  -- however it does not include the time for some reason.
  -- Need to figure out why, as the time is crucial.
  -- (Chris Saxon - https://livesql.oracle.com/apex/livesql/file/content_LIHBDFVF9IUU6AFZB4H6NVLWL.html)
  SELECT TO_TIMESTAMP(PreviousStartDateTimeInput, 'mm-dd-yyyy hh24:mi') + LEVEL - 1, -- AS StartDateTime
         TO_TIMESTAMP(PreviousEndDateTimeInput, 'mm-dd-yyyy hh24:mi') + LEVEL - 1    -- AS EndDateTime
  BULK COLLECT INTO StartEndDateTime
  FROM DUAL
  CONNECT BY LEVEL < DaysToLookBack;

  DBMS_OUTPUT.PUT_LINE(StartEndDateTime.COUNT);

  -- Based on the article "Bulk data processing with BULK COLLECT and FORALL in PL/SQL"
  -- by Steven Feuerstein (https://blogs.oracle.com/connect/post/bulk-processing-with-bulk-collect-and-forall)
  -- FORALL seems like it would do what I want, but not sure:
  -- FORALL index IN 1 .. StartEndDateTime.COUNT
  -- This code needs to get the average for the given RTXNTYPCD for every day
  -- in my two_cols_rt record.
  -- Yes, needs work, as this will not give me the overall average of transactions.
  SELECT COUNT(RTXNSOURCECD)
  INTO   PWTHCount
  FROM   RTXN
  WHERE  datelastmaint BETWEEN PreviousStartDateTimeInput AND PreviousEndDateTimeInput
  AND    RTXNSOURCECD = 'ATM'
  AND    RTXNTYPCD = RTXNTYPCDToCount;
  -- end FORALL

  -- This gets the current count for the given RTXNTYPCD
  SELECT COUNT(RTXNSOURCECD)
  INTO   AvgTransactions
  FROM   RTXN
  WHERE  datelastmaint BETWEEN CurrentStartDateTimeInput AND CurrentEndDateTimeInput
  AND    RTXNSOURCECD = 'ATM'
  AND    RTXNTYPCD = RTXNTYPCDToCount;

  -- If the current count for the given RTXNTYPCD
  -- is less than the average for the period, raise the error
  IF AvgTransactions < PWTHCount THEN
    RAISE Low_ATM_Tran_Count_PWTH;
  END IF;
END;
</code>

Thank you, Michael
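One way to get the whole answer in a single query, as a sketch using the question's table and column names: generate the 180 one-hour windows with CONNECT BY and NUMTODSINTERVAL, count rows per window, then average the counts. (On the time being lost above: the parameters are already TIMESTAMPs, so passing them through TO_TIMESTAMP with a format mask forces an implicit to-string conversion that can drop the time portion; use the parameters directly instead.)

<code>
WITH windows AS (
  -- one row per day, each carrying that day's 01:00-02:00 window
  SELECT TIMESTAMP '2022-11-17 01:00:00'
           - NUMTODSINTERVAL(LEVEL - 1, 'DAY') AS win_start,
         TIMESTAMP '2022-11-17 02:00:00'
           - NUMTODSINTERVAL(LEVEL - 1, 'DAY') AS win_end
  FROM   dual
  CONNECT BY LEVEL <= 180
)
SELECT AVG(cnt) AS avg_rows_per_window
FROM  (SELECT w.win_start, COUNT(r.rtxnsourcecd) AS cnt
       FROM   windows w
       LEFT JOIN rtxn r
         ON  r.datelastmaint >= w.win_start
         AND r.datelastmaint <  w.win_end
         AND r.rtxnsourcecd  = 'ATM'
       GROUP BY w.win_start);
</code>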
Categories: DBA Blogs

Oracle dump into csv file have a empty line at start.

Tom Kyte - Wed, 2022-11-30 10:26
I am executing a SQL*Plus session to dump a table's data into a .csv file. I need header details along with the data in the dumped csv file. When I use the "set heading on" option, I get the header details along with the data, plus one empty line at the start. I tried multiple options like "set newpage NONE" and others, but the first empty line still appears. Can you please help to solve this issue? My code snippet is below:

<code>
sqlplus -s /nolog << !EOF!
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
connect sys/${DB_PASSWORD}@//${CONN_STR}:${CONN_PORT}/XE as ${DB_USERNAME}
--/*************************/--
set echo off
set feedback off
set linesize 4000
set newpage NONE
set pagesize 0
set serveroutput off
set termout off
set flush off
SET NUMWIDTH 5
SET COLSEP ","
set markup csv on
set sqlprompt ''
set trimspool on
SET VERIFY OFF
SET TERM OFF
--/*******************************/--
--/**** Clear Screen & Buffer ****/--
--/*******************************/--
clear screen
clear buffer
--/*****************************/--
--/**** Start spooling file ****/--
--/*****************************/--
SPOOL $FILE
set heading on
set pagesize 0 embedded on
select * from $table_name;
SPOOL OFF
EXIT
!EOF!
</code>

Output:

<code>
sh-4.2$ cat aprm_mirrordb_persons_2022_11_16.csv
"PERSONID","LASTNAME"
1,"das"
2,"das2"
3,"abc"
4,"def"
5,"testdata only temporary purpose"
6,"testdata only temporary purpose"
7,"testdata only temporary purpose"
</code>
Categories: DBA Blogs

Oracle application containers

Tom Kyte - Wed, 2022-11-30 10:26
I have created an application root container and two application PDBs. In the application root I created a table products as follows:

<code>
CREATE TABLE products SHARING=EXTENDED DATA
 (prod_id   NUMBER(4) CONSTRAINT pk_products PRIMARY KEY,
  prod_name VARCHAR2(15));

ALTER TABLE hr_app_owner.products ENABLE containers_default;
</code>

If containers_default is enabled on this table, then the query (select * from products;) hangs. When containers_default is disabled, the query works using the CONTAINERS clause (select * from CONTAINERS(products);). Is this a restriction of SHARING=EXTENDED DATA? I tried to search through the documentation without any luck. Thanks in advance.
Categories: DBA Blogs
