Solving the package dependency problem using synonyms

Problem

I take part in supporting an ODS system. One of the biggest problems during the deployment of revisions is the dependency between packages. Suppose a package A has to be changed and that it is referenced by two packages, B1 and B2, which are used in some database sessions. If package A is altered and it contained some global variables, then the next execution of B1 or B2 in those sessions triggers the error "ORA-04061: existing state of package has been invalidated". This is because global variables preserve their values for the duration of a database session. Altering the package code may change the variables, delete some of them, or add others, so they have to be initialized again. Resetting the global variables in the middle of a session could lead to inconsistency, so to prevent this, Oracle throws the ORA-04061 exception. This mechanism is described in detail on the page

http://oraclequirks.blogspot.com/2007/03/ora-04061-existing-state-of-package-has.html
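The error is easy to reproduce with a package that holds a global variable. A minimal sketch, with made-up package and schema names:

```sql
-- Session 1: create and use a package with session state
CREATE OR REPLACE PACKAGE a_pkg AS
  g_counter NUMBER := 0;    -- global variable, keeps its value for the whole session
  PROCEDURE increment;
END a_pkg;
/
CREATE OR REPLACE PACKAGE BODY a_pkg AS
  PROCEDURE increment IS
  BEGIN
    g_counter := g_counter + 1;
  END;
END a_pkg;
/
EXEC a_pkg.increment;       -- the session now holds package state

-- Session 2: deploy a new revision of the body
-- CREATE OR REPLACE PACKAGE BODY a_pkg AS ... END;

-- Session 1 again: the preserved state is no longer valid
EXEC a_pkg.increment;
-- fails with ORA-04061: existing state of package has been invalidated
```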

In addition, altering a package body or package specification is not possible as long as the package is used in a session. The replacing DDL (CREATE OR REPLACE PACKAGE [BODY]) waits until no session uses the package, which may never happen if a new session starts using the package before the previous one stops doing so. To change the package, all sessions using it must be killed. It is quite common that a session cannot be killed immediately and is instead moved into the MARKED FOR KILL state. As a result, the deployment of the new package version is postponed indefinitely, until the session is really gone. I even created a script which kills every session that blocks any session started from my machine by my OS user. The script is really crude, but it works and is really useful, so I am posting it here.

DECLARE
  CURSOR c_sql_hist(pp_machine VARCHAR2, pp_osuser VARCHAR2) IS
    SELECT ash.blocking_session, ash.blocking_session_status,
           ash.blocking_session_serial#, s.program, s.module, s.action
      FROM v$active_session_history ash
      JOIN v$session s ON ash.blocking_session = s.sid
                      AND ash.blocking_session_serial# = s.serial#
     WHERE ash.session_id IN (SELECT sid FROM v$session
                               WHERE machine = pp_machine AND osuser = pp_osuser)
       AND ash.sample_time > SYSDATE - 2 / 24 / 3600
     ORDER BY sample_time DESC;
  v_r c_sql_hist%ROWTYPE;

  v_my_machine v$session.machine%TYPE;
  v_my_osuser  v$session.osuser%TYPE;
BEGIN
  -- identify the machine and OS user of the current session
  SELECT machine, osuser INTO v_my_machine, v_my_osuser
    FROM v$session
   WHERE sid = (SELECT sid FROM v$mystat WHERE ROWNUM = 1);

  FOR i IN 1 .. 10000 LOOP  -- hard limit, just to prevent infinite execution
    dbms_output.put_line('step: ' || i);
    OPEN c_sql_hist(v_my_machine, v_my_osuser);
    FETCH c_sql_hist INTO v_r;
    IF c_sql_hist%FOUND THEN
      dbms_output.put_line(v_r.blocking_session || ' ' || v_r.blocking_session_status || ' ' ||
                           v_r.blocking_session_serial# || ' ' || v_r.program || ' ' ||
                           v_r.module || ' ' || v_r.action);
      EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION ''' || v_r.blocking_session || ',' ||
                        v_r.blocking_session_serial# || '''';
      CLOSE c_sql_hist;
    ELSE
      CLOSE c_sql_hist;   -- close the cursor before leaving the loop
      EXIT;
    END IF;
    dbms_lock.sleep(0.5); -- sleep, so new blocking sessions have a chance to appear
  END LOOP;
END;
/

In order to reduce the dependency problems described above, various tool packages, which should live in one central schema, were copied to all schemas. As a result, each copy may be changed without killing all processes, only some of them. However, this solution is terrible from the point of view of code maintenance.

Solution

The solution to this problem is to reference packages by synonyms. If a package needs to be changed, a new version of it is installed under a different name. After it is tested, the synonym is altered to point to the new version. The next execution through this synonym within a session causes the new package to be executed. Its global variables are initialized and their previous values are lost, but this is not a problem in our ODS: they work as a cache and do not hold any state between executions.

There is an additional limitation: one procedure or package cannot be executed concurrently if its executions use different objects through the same synonym. Suppose that after B1 is executed, the A synonym is changed to point to A_v2. If B1 is executed again (directly or indirectly), the second execution of B1 will not start before the old execution using the old package has finished. However, this is not really a problem.

The approach described above allows us to deploy database code efficiently. It is, however, necessary to remember how this mechanism works and what the limitations are. A naming convention also needs to be developed (for example, appending the result of TO_CHAR(sysdate, '$MM_DD'), that is a dollar sign plus the month and day of deployment, to the package name). It is also necessary to give other schemas the same grants on the new package version as were given to the old version.
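The deployment steps can be sketched in SQL. The names here are hypothetical: a_pkg is the synonym the callers reference, b_schema is a consuming schema, and the $MM_DD suffix follows the naming convention above:

```sql
-- 1. Install the new version under a date-suffixed name (deployed on April 15th)
CREATE OR REPLACE PACKAGE a_pkg$04_15 AS
  PROCEDURE do_work;
END a_pkg$04_15;
/
CREATE OR REPLACE PACKAGE BODY a_pkg$04_15 AS
  PROCEDURE do_work IS
  BEGIN
    NULL;  -- new implementation goes here
  END;
END a_pkg$04_15;
/

-- 2. Re-grant the privileges the old version had
GRANT EXECUTE ON a_pkg$04_15 TO b_schema;

-- 3. Repoint the synonym; sessions pick up the new version on their next call
CREATE OR REPLACE SYNONYM a_pkg FOR a_pkg$04_15;
```

The old version can be dropped once no session executes it any more.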

You May Also Like

Sample for lift-ng: Micro-burn 1.0.0 released

During the last few evenings, in my free time, I've worked on a mini-application called micro-burn. The idea for it appeared while working with Agile Jira in our commercial project. It is a great tool for agile project management: it has inline task editing, a drag & drop board, reports and many more, but it also has a few drawbacks that dampen our team's motivation.

Motivation

From time to time our sprint scope changes. It is not a big deal, because we are trying to be agile :-) but in this situation Jira's burndown chart draws a peak, because that chart in fact shows scope changes, not a real burndown. It also means that the chart cannot break through the x-axis if we really do more than we planned: it always stops at zero at most.

Also, for better progress monitoring, we've started to split our user stories into technical tasks and estimate them. The original burndown chart doesn't show points from technical tasks. I can see the motivation for this: an almost finished user story isn't finished at all until the user can use it. But on the other hand, if we know which task is problematic, we can do some teamwork to move it on.

So I realized that this was a good opportunity to try some new approaches and tools.

Tools

I've started with the Lift framework. In the world of Single Page Applications, this framework offers more than a simple interface for serving REST services: it comes with awesome Comet support. Comet is a replacement for WebSockets that runs on all browsers. It supports long polling and a transparent fallback to short polling if the limit of client connections is exceeded. On the backend, you can handle pushes in a CometActor. For further reading, take a look at Roundtrip promises.

But Lift is also a kind of framework of frameworks: you can build your own abstraction over CometActors and push JavaScript to the client, which shortens your way from server to client. This was the trigger for the author of lift-ng to create an Angular integration built on top of Lift. It provides AngularActors, from which you can emit/broadcast events to a controller's scope, and NgModelBinders, which synchronize your backend model with the client scope in a few lines! I've used them to send the project state (all sprints and their details) to the client and notify it about scrum board changes. My actor doing all of this hard work looks pretty small:

Lift-ng also provides factories for creating Angular services. Services can respond with futures that are transformed into Angular promises on the fly. This is all that was needed to serve the sprint history:

And on the client side, the use of the service:


In my opinion these two frameworks give a huge boost in developing web applications. You have the power of strong typing with Scala, you can design your domain on actors, and all of this with the simplicity of node.js: no JSON-transforming boilerplate, and dynamic application reload.

DDD + Event Sourcing

I've also tried a few fresh approaches to DDD. I've organized the domain objects into actors: there are SprintActors, which encapsulate the sprint aggregate root. Task changes are stored as events, which are computed as the difference between two board states. When a sprint history needs to be provided, the successive board states are computed from the initial state and the sequence of events. I realized that the best way to keep this kind of event sourcing approach tested is to use random tests. This is a test doing random changes on a board, calculating events and checking that the initial state plus the events equals the previously captured state:



First look

Screenshot of first version:


If you want to take a closer look, check out the source code or download a ready-to-run fat jar on GitHub.