On many occasions, a calendar will do fine for scheduling jobs. However, some situations require immediate action and cannot wait for the next calendar-based activation. One example is a user logging on to the database, where a logon trigger kicks off additional actions.
Another example is a situation in which we want a backup server to be utilized to the maximum, but not beyond that. We schedule all the backups independently of each other and have each backup raise an event when it is done, telling the system that another backup can go ahead. By letting the backup jobs wait for an event that essentially flags "there is backup capacity available now", we make sure that a backup does not take longer than needed. We also make sure that the backup system is pushed to its highest throughput.
If we just use preset dates and times to start the backups, chances are that multiple backups end up running at the same time (possibly because the growth of one or more databases makes their backups take longer than anticipated). On the other hand, if we make sure that two backups never run at the same time, we will likely have lots of idle time in the backup system.
This is reason enough to learn how to make good use of events. There are a few things we need to do. It essentially comes down to:
Creating a queue and defining a payload for that queue
Having a process that puts the message on the queue
Coupling one or more job definition(s) to the queue
Again, this gives a kind of control that is hard to find in third-party scheduling packages.
Event messages are placed on an event queue, which is handled by Advanced Queuing (AQ). So we need to call the AQ packages, and for that we require DBMS_AQ and DBMS_AQADM. Before Oracle 10g, we also had to set the AQ_TM_PROCESSES parameter to a non-zero value for this to work. Since Oracle 10g, this is no longer the case and we can leave AQ_TM_PROCESSES at zero.
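To see what the parameter is currently set to, a quick check (a sketch, assuming a session with access to the standard V$PARAMETER view) could look like this:

```sql
select name, value
  from v$parameter
 where name = 'aq_tm_processes';
```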
First, make sure we can use AQ:
select grantee, privilege, table_name
  from dba_tab_privs
 where table_name in ('DBMS_AQ', 'DBMS_AQADM')
   and grantee = 'MARVIN';
The expected output is as shown in the following screenshot:
If this query does not show MARVIN having the EXECUTE privilege on both DBMS_AQ and DBMS_AQADM, we need to grant them to our user.
As a DBA, execute the following:
grant execute on dbms_aq to marvin;
grant execute on dbms_aqadm to marvin;
grant select on dba_aq_agents to marvin;
grant create type to marvin;
alter user marvin quota unlimited on users;
--/
begin
  dbms_aqadm.grant_system_privilege ('ENQUEUE_ANY', 'marvin', FALSE);
  dbms_aqadm.grant_system_privilege ('DEQUEUE_ANY', 'marvin', FALSE);
  dbms_aqadm.grant_system_privilege ('MANAGE_ANY', 'marvin', TRUE);
end;
/
This makes sure that marvin has enough privileges to create and use queues. Now connect as marvin and create an object type that we can use to put a message on the queue, and to read from the queue later on.
connect marvin/panic

create or replace type bckuP_msgt as object ( msg varchar2(20) )
/
--/
begin
  dbms_aqadm.create_queue_table
  (
    queue_table        => 'bckuP_qt',
    queue_payload_type => 'bckuP_msgt',
    multiple_consumers => TRUE
  );
  dbms_aqadm.create_queue
  (
    queue_name  => 'bckuP_q',
    queue_table => 'bckuP_qt'
  );
  dbms_aqadm.start_queue
  (
    queue_name => 'bckuP_q'
  );
end;
/
This creates a queue table called bckuP_qt, which holds messages of type bckuP_msgt. It then creates the queue bckuP_q in that table and starts the queue immediately.
The following objects show up in the schema, which are created to support the queue table:
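Instead of browsing the schema, a query like the following sketch against the standard USER_OBJECTS view lists the objects AQ created for us (object names are stored in upper case):

```sql
select object_name, object_type
  from user_objects
 where object_name like '%BCKUP%'
 order by object_type, object_name;
```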
The queue definition part is ready. Now we can tie a job to the queue. First, create a job as follows:
--/
BEGIN
  sys.dbms_scheduler.create_job
  (
    job_name            => '"MARVIN"."BCKUP_01"',
    job_type            => 'EXECUTABLE',
    job_action          => '/home/oracle/bin/rman.sh',
    event_condition     => 'tab.user_data.msg=''GO''',
    queue_spec          => '"MARVIN"."BCKUP_Q"',
    start_date          => systimestamp at time zone 'Europe/Amsterdam',
    job_class           => '"LONGER"',
    comments            => 'backup a database',
    auto_drop           => FALSE,
    number_of_arguments => 1,
    enable              => FALSE
  );
  sys.dbms_scheduler.set_attribute
  (
    name      => '"MARVIN"."BCKUP_01"',
    attribute => 'raise_events',
    value     => dbms_scheduler.job_started +
                 dbms_scheduler.job_succeeded +
                 dbms_scheduler.job_failed +
                 dbms_scheduler.job_broken +
                 dbms_scheduler.job_completed +
                 dbms_scheduler.job_stopped +
                 dbms_scheduler.job_sch_lim_reached +
                 dbms_scheduler.job_disabled +
                 dbms_scheduler.job_chain_stalled
  );
  sys.dbms_scheduler.set_job_argument_value
  (
    job_name          => '"MARVIN"."BCKUP_01"',
    argument_position => 1,
    argument_value    => 'db_01'
  );
  DBMS_SCHEDULER.SET_ATTRIBUTE
  (
    name      => '"MARVIN"."BCKUP_01"',
    attribute => 'destination',
    value     => 'pantzer:15021'
  );
  DBMS_SCHEDULER.SET_ATTRIBUTE
  (
    name      => '"MARVIN"."BCKUP_01"',
    attribute => 'credential_name',
    value     => '"MARVIN"."JOBS_CRED2"'
  );
END;
/
This is just a simple remote external job that calls an RMAN script with an argument naming the database to back up. As the backup will take longer than a few seconds, it makes sense to put it in the LONGER job class that we defined a while ago. The queue coupled to this job is the queue we defined before, bckuP_q, as specified by the queue_spec parameter. As soon as the GO message appears in the payload of the queue, all of the jobs that listen to this queue and are waiting for this GO message will be started. The code listed for the MARVIN job can also be put together using DB Console. In the following Schedule screen, select Event as Schedule Type:
As the job was not Enabled, it now looks like the following:
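We can also check the job's status from SQL instead of DB Console; a sketch using the standard USER_SCHEDULER_JOBS view:

```sql
select job_name, enabled, state
  from user_scheduler_jobs
 where job_name = 'BCKUP_01';
```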
So, let's enable the job:
--/
BEGIN
  sys.dbms_scheduler.enable( '"MARVIN"."BCKUP_01"' );
END;
/
This produces the following:
Now, put a GO message on the queue to fire the event:

--/
declare
  my_msgid RAW(16);
  props    dbms_aq.message_properties_t;
  enqopts  dbms_aq.enqueue_options_t;
begin
  sys.dbms_aq.enqueue ('marvin.bckuP_q', enqopts, props,
                       marvin.bckuP_msgt('GO'), my_msgid);
end;
/
commit;
The result is that all of the jobs waiting for the GO message are started at the same time. With the health of the backup system in mind, it would be wiser to query the jobs view, find the backup job that was scheduled first, and give that job its own specific event. In that case, the BCKUP_01 job would wait for the message "BCKUP_01", and BCKUP_02 would wait for the message "BCKUP_02".
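Assuming each backup job was created with its own event_condition, for example 'tab.user_data.msg=''BCKUP_01''' instead of the generic GO condition shown earlier, releasing exactly one job is then a matter of enqueuing that job's name. This is a sketch along the lines of the enqueue block above, not part of the original setup:

```sql
--/
declare
  my_msgid RAW(16);
  props    dbms_aq.message_properties_t;
  enqopts  dbms_aq.enqueue_options_t;
begin
  -- releases only the job whose event_condition matches 'BCKUP_01'
  sys.dbms_aq.enqueue ('marvin.bckuP_q', enqopts, props,
                       marvin.bckuP_msgt('BCKUP_01'), my_msgid);
end;
/
commit;
```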
Another option would be for Oracle to allow us to define an event that is delivered to exactly one waiting job at a time; an enhancement request has been filed for this. It would make this kind of waiting a bit easier, because normal queuing behavior is then preserved. This means that things such as job priorities will be honored. When we define a separate event for every job, we have manual control, but we cannot influence the selection order of the jobs in the queue, for example, by raising the priority of a job.
When a backup is ready, the backup system can handle the next backup. We could perform the enqueue operation that puts the next GO message into the queue in the epilogue of the backup script. However, what happens if the script crashes? The next backup would never be released. A smarter location for this code is an event handler routine that waits for termination messages on the Scheduler event queue. As soon as it sees the termination of a backup, it can decide to call in the next one by sending the waiting job its signal.