The Scheduler resource is configured with the following parameters:

Category: Any category name, used to group resources in the administrative console.

DataSource: Defines the location where the Scheduler stores all tasks that are created. See Database configuration for more details. The user ID and password can be left blank if the database does not require them. It may be desirable to share a single DataSource among several Scheduler resources.

Table Prefix: The Scheduler requires several database tables. This value is added to the beginning of the actual table names that the Scheduler uses, which allows several Scheduler resources to share one database. If the database requires a schema name (for example, myschema.), it can be included in the prefix.

Poll Interval: Each Scheduler resource has a poll daemon thread. This value, in seconds, instructs the poll daemon thread to wait that long between polls. See Poll daemons for more details.

Work Manager: The Work Manager is used to control how many tasks can be run simultaneously, and what J2EE context information to propagate from the application that creates the task to the task thread when it runs.
See Work Manager for more details.

Poll daemons

The Scheduler resource has one poll daemon thread active on each server for which the Scheduler service is enabled. Therefore, if a Scheduler resource is configured at the node scope, each server in that node runs a poll daemon. The poll daemon is responsible for loading tasks from the database; it uses the Poll Interval setting on the Scheduler configuration resource to determine how long to wait between database polls.
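As a rough sketch of what a poll cycle does (plain Java for illustration; this is not the WebSphere implementation), the daemon loads the tasks whose fire times fall within the next poll window:

```java
import java.util.ArrayList;
import java.util.List;

public class PollCycle {
    // Return the fire times (in ms) that fall inside the current poll window;
    // these are the tasks the daemon would load into memory this cycle.
    static List<Long> tasksDueInWindow(List<Long> fireTimes, long windowStart, long pollIntervalMs) {
        List<Long> due = new ArrayList<>();
        for (long t : fireTimes) {
            if (t >= windowStart && t < windowStart + pollIntervalMs) {
                due.add(t);
            }
        }
        return due;
    }

    public static void main(String[] args) {
        List<Long> fireTimes = List.of(1_000L, 30_000L, 90_000L);
        // With a 60-second poll interval starting at t=0, only the first two
        // tasks are due this cycle; the third waits for the next poll.
        System.out.println(tasksDueInWindow(fireTimes, 0L, 60_000L)); // prints [1000, 30000]
    }
}
```

A longer poll interval widens this window, which is why a very large interval keeps many tasks in memory at once.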
If the value is 60, the daemon attempts to load tasks every 60 seconds, picking up all tasks that are set to fire during that cycle.

Figure 3. Poll daemon with a 60-second poll interval

The Poll Interval setting determines the minimum frequency at which a repeating task can fire. It also determines how many tasks are loaded into memory at one time. Therefore, a large poll interval such as 24 hours may not be the best choice, even if you only run repeating tasks once per day.
Every task due within the window would be loaded into memory, consuming resources. Conversely, a very small poll interval of 1 second may seem like the right thing to do, but it creates additional database contention. A good practice is to choose a value between 5 seconds and 1 hour, taking into account the smallest repeat interval that your tasks will require.

Work Manager

The Work Manager setting for the Scheduler configuration resource provides the Scheduler with a fixed number of threads on which to dispatch work, and a policy that determines how to propagate J2EE context information onto a thread.
Figure 4. Default WorkManager configuration panel

The WorkManager parameters that apply to the Scheduler are as follows:

Alarm Pool: The number of threads that can dispatch tasks at one time, minus 1. The Scheduler uses one alarm thread internally; the remaining alarm threads are used exclusively for dispatching tasks. Figure 3 shows an example where two alarm threads dispatch tasks from a single poll daemon thread.

The WorkManager also determines which J2EE contexts are propagated to the task thread. For example, if the "security" context is enabled and WebSphere Application Server global security is enabled, the security credentials that are active on the application that calls the Scheduler are reapplied to the thread that runs the task.
If "Bob" created the task, then "Bob" will also be running the task. A WorkManager can be shared between more than one Scheduler and used for non-Scheduler purposes, which can be useful if you want a single thread pool and alarm pool for multiple applications and services. Keep in mind that although a WorkManager can be configured at the Cell and Node scopes, the thread pool and alarm pool are duplicated in each active server. See Figure 5 for an illustration of how different servers have different WorkManager instances but the same number of available threads and alarms.
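The Alarm Pool sizing rule above (one alarm thread reserved internally) amounts to the following arithmetic, sketched here in plain Java:

```java
public class AlarmPool {
    // The Scheduler reserves one alarm thread for itself, so the number of
    // tasks that can be dispatched concurrently is the pool size minus one.
    static int concurrentTaskThreads(int alarmPoolSize) {
        return Math.max(0, alarmPoolSize - 1);
    }

    public static void main(String[] args) {
        // An alarm pool of 3 leaves 2 threads for dispatching tasks,
        // matching the two dispatch threads shown in Figure 3.
        System.out.println(concurrentTaskThreads(3)); // prints 2
    }
}
```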
Figure 5. WorkManager instances duplicated across servers

Each of these services is only propagated if the service is installed and enabled on the application server used both to create and to fire the task. See Task security for more details.

Internationalization: The internationalization context, that is, the java.util.Locale and java.util.TimeZone, is stored with the scheduled task when it is created and is reapplied to the thread when the task runs.

Work Area: All Work Area context information active on the thread is stored with the scheduled task when it is created and reapplied to the thread when the task runs.
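Conceptually, the capture-and-reapply pattern for the internationalization context looks like the following sketch (plain Java; illustrative only, not the WebSphere implementation):

```java
import java.util.Locale;
import java.util.TimeZone;

public class I18nContext {
    final Locale locale;      // captured when the task is created
    final TimeZone timeZone;

    I18nContext() {
        this.locale = Locale.getDefault();
        this.timeZone = TimeZone.getDefault();
    }

    // Reapply the stored context around the task, then restore the thread.
    void applyTo(Runnable task) {
        Locale savedLocale = Locale.getDefault();
        TimeZone savedTz = TimeZone.getDefault();
        try {
            Locale.setDefault(locale);
            TimeZone.setDefault(timeZone);
            task.run();
        } finally {
            Locale.setDefault(savedLocale);
            TimeZone.setDefault(savedTz);
        }
    }
}
```

Note that the real service scopes this information per thread, whereas Locale.setDefault is JVM-wide; the sketch only shows the store-then-reapply idea.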
Application Profile: Application Profile context information is stored with the scheduled task when it is created and reapplied to the thread when the task runs.

High availability

The Scheduler service can be made highly available by creating duplicate Scheduler resources or by creating a resource in a cluster. The redundant Scheduler engines compete for leases, and the Scheduler that wins the lease runs the tasks.
If a Scheduler does not obtain the lease, its poll daemon will not attempt to load and run any tasks. A lease is shared among Schedulers that use the same JNDI names and database tables, so Scheduler resources configured at the cluster level automatically take advantage of leases. The lease lasts two poll cycles; therefore, if the poll interval is 40 seconds, the lease will expire every 80 seconds. Versions of the Scheduler after Version 5 renew the lease on a schedule that is independent of the poll interval. This allows customers to use larger poll intervals without sacrificing availability.
With versions later than 5, the lease is renewed at a fixed rate, so the maximum time for which a Scheduler could remain unavailable is a fixed number of seconds, regardless of the poll interval.

About leases

Leases were not used prior to Version 5. When redundant Schedulers were added, availability increased; however, contention also increased.
You could not add more than one redundant Scheduler without sacrificing performance. Each task would be loaded and run on each server, but only one copy would run successfully; all other duplicates would simply abort when the collision was detected.
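The lease mechanism that replaced this collision-and-abort behavior can be sketched as follows (illustrative plain Java, not the actual database-backed implementation):

```java
import java.util.concurrent.atomic.AtomicReference;

public class Lease {
    // In the real service the lease lives in a shared database table;
    // an AtomicReference stands in for it here.
    private final AtomicReference<String> owner = new AtomicReference<>(null);

    // Try to take (or keep) the lease; only the owner loads and runs tasks.
    boolean acquire(String instanceId) {
        owner.compareAndSet(null, instanceId);
        return instanceId.equals(owner.get());
    }

    // Called when the owner fails to renew within the lease period.
    void expire() {
        owner.set(null);
    }

    public static void main(String[] args) {
        Lease lease = new Lease();
        System.out.println(lease.acquire("Server1")); // prints true: Server1 runs tasks
        System.out.println(lease.acquire("Server2")); // prints false: Server2 stays idle
        lease.expire();                               // Server1 fails to renew
        System.out.println(lease.acquire("Server2")); // prints true: Server2 takes over
    }
}
```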
By re-running the DDL that creates the tables, the new tables are created without affecting existing data (see Related topics for details on how to create the tables). Once the tables are created, the Scheduler automatically starts using leases to manage redundant Scheduler contention. In Figure 6, a Scheduler resource is duplicated on three distinct servers in the same cell. The poll daemon for the Scheduler on Server1 of Node A holds the lease and will load and process tasks.
The other two poll daemons, on Nodes B and C, remain idle until the Scheduler daemon on Server1 is no longer able to renew its lease due to a failure.

Figure 6. Redundant Schedulers with independent servers

Alternatively, in Figure 7, with the WebSphere Network Deployment Manager, each server could instead be a cluster member. A Scheduler resource is created on each cluster member, and each server has a running Scheduler instance.
In a complex topology, this method is preferred.

Figure 7. Redundant Schedulers in a cluster

Recovery and delays

The Scheduler uses fixed-delay fire-time calculation. When a Scheduler becomes overloaded due to insufficient resources or long-running tasks, tasks may run late (see Task latency in the Performance section). When recurring tasks run late, the next fire time is calculated from the actual fire time.
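In other words, a late run pushes every subsequent fire time back by the same amount. A minimal sketch, assuming a simple millisecond clock:

```java
public class FixedDelay {
    // Fixed-delay scheduling: the next fire time is computed from when the
    // task actually ran, not from the time it was originally scheduled.
    static long nextFireTime(long actualFireTimeMs, long repeatIntervalMs) {
        return actualFireTimeMs + repeatIntervalMs;
    }

    public static void main(String[] args) {
        long actual = 25_000L;  // an outage delayed a task due at t=0 to t=25s
        // With a 60-second repeat interval, the next run is t=85s, not t=60s.
        System.out.println(nextFireTime(actual, 60_000L)); // prints 85000
    }
}
```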
The bottom arrows indicate how tasks would run without any delays; the top arrows indicate the actual fire times, given that tasks 2 and 3 are missed due to an outage or delay. Here, the task that would normally have fired at its scheduled time instead fires immediately after the Scheduler becomes available again.

Figure 8.

All tasks that were in the process of running will be re-run if they did not complete successfully (that is, if their transaction did not commit).
The repeat count reflects only the tasks that have run successfully (see EJB transaction considerations for more details).

Scalability

The Scheduler scales vertically with the addition of more processors and by increasing the number of alarms available on the WorkManager associated with the Scheduler. The Scheduler does not natively scale horizontally. Because it cannot be natively partitioned, it is up to the administrator and developer to partition the application over more than one Scheduler, to create redundant Schedulers that talk to different databases or tables, or to partition the work that the Scheduler is executing, thereby increasing the resources available to the Scheduler daemon.
Partitioned Scheduler daemon

To illustrate how an administrator could partition a Scheduler into two partitions, examine Figure 9. Here, there are four nodes, each with one server, in a single cell.
An application is installed into the cell and deployed on all four servers, and a Scheduler resource is defined on each server.

Figure 9. Partitioned Scheduler

Normally, in this scenario, all Schedulers would communicate with the same database tables. With this configuration, the application remains the same, but the Scheduler work is partitioned between two different nodes. The Scheduler service does not have the concept of a "preferred" daemon, so to force one Scheduler to obtain the lease in this scenario, the administrator would need to either:

- Delay starting one poll daemon until the preferred poll daemon is active and has obtained the lease (that is, has performed its first poll), or
- Stop one poll daemon until a redundant daemon obtains the lease, which could take as much time as one Poll Interval (see Forcing lease acquisition).
With this type of scenario, application developers and administrators must understand that because the Schedulers are now partitioned, applications will only see a subset of the scheduled tasks. Therefore, the application will need to look for tasks on both partitions when administering tasks that have been scheduled.
For example, if a task was created on Node A and the operator needs to cancel that task, the operator would need to know that it was created on Node A or would need to try to cancel it on all nodes.
Partitioned Scheduler with forced leases

It may be more desirable to write applications such that tasks are categorized and a different Scheduler is used for each subset; for example, one Scheduler could handle requests from employees with odd ID numbers and another those with even ID numbers.
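Such a categorization might be routed in application code like this (the JNDI names here are hypothetical, not WebSphere defaults):

```java
public class SchedulerPartition {
    // Pick a Scheduler partition by employee ID parity. Both JNDI names are
    // made up for illustration; the application would look up the returned
    // name and create the task on that Scheduler.
    static String schedulerJndiFor(int employeeId) {
        return (employeeId % 2 == 0) ? "sched/EvenScheduler" : "sched/OddScheduler";
    }

    public static void main(String[] args) {
        System.out.println(schedulerJndiFor(42)); // prints sched/EvenScheduler
        System.out.println(schedulerJndiFor(7));  // prints sched/OddScheduler
    }
}
```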
Again, the only way to guarantee that a lease is obtained on each node is to force lease acquisition (see Forcing lease acquisition).
Related topics

- IBM WebSphere Application Server
- WebSphere Application Server V7 Administration and Configuration Guide