Clustering Quartz Jobs

I was looking for a scale-out option for scheduled jobs and, having used Quartz previously, found that it is pretty easy to get clustering up and running. The only caveat is that it is possible only with the JDBC job store. The sample I tried was a straightforward job that just prints the time and the ID of the scheduler instance that triggered it.

Sample Job:

import org.quartz.*;

@PersistJobDataAfterExecution
public class PrintJob implements Job {

   public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
      try {
         System.out.println("Print : "+System.currentTimeMillis()+" , "+jobExecutionContext.getScheduler().getSchedulerInstanceId());
      } catch (SchedulerException e) {
         e.printStackTrace();
      }
   }
}

Sample Trigger:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

import java.io.*;
import java.util.Properties;

import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;

public class PrintScheduler {

	private Scheduler scheduler;
	public PrintScheduler(String instanceId) {
		try {
			Properties properties = loadProperties();
			properties.put("org.quartz.scheduler.instanceId",instanceId);
			scheduler = new StdSchedulerFactory(properties).getScheduler();
			scheduler.start();
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

	private Properties loadProperties() throws FileNotFoundException,IOException {
		Properties properties = new Properties();
		try (InputStream fis = PrintScheduler.class.getResourceAsStream("quartz.properties")) {
			properties.load(fis);
		}
		return properties;
	}

	public void schedule() throws SchedulerException {
		JobDetail job = newJob(PrintJob.class).withIdentity("printjob", "printjobgroup").build();
		Trigger trigger = TriggerBuilder.newTrigger().withIdentity("printTrigger", "printtriggergroup")
				.startNow().withSchedule(simpleSchedule().withIntervalInMilliseconds(100l).repeatForever()).build();
		scheduler.scheduleJob(job, trigger);
	}

	public void stopScheduler() throws SchedulerException {
		scheduler.shutdown();
	}

	public static void main(String[] args) {
		PrintScheduler printScheduler = new PrintScheduler(args[0]);
		try {
//			printScheduler.schedule(); //uncomment for the first run; with the JDBC job store the scheduled job is persisted across runs
			Thread.sleep(60000l);
			printScheduler.stopScheduler();
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

}

Please note, I have used quartz 2.x for this example.

On the configuration side, it remains more or less the same as for a single node, with a couple of exceptions –

org.quartz.scheduler.instanceName = PRINT_SCHEDULER1
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 4
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true

#specify the jobstore used
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false

#The datasource for the jobstore that is to be used
org.quartz.jobStore.dataSource = myDS

#quartz table prefixes in the database
org.quartz.jobStore.tablePrefix = qrtz_
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.isClustered = true
org.quartz.scheduler.instanceId = PRINT_SCHEDULER1

#The details of the datasource specified previously
org.quartz.dataSource.myDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL = jdbc:mysql://localhost:3307/blog_test
org.quartz.dataSource.myDS.user = root
org.quartz.dataSource.myDS.password = root
org.quartz.dataSource.myDS.maxConnections = 20

The configurations that are cluster specific here are org.quartz.jobStore.isClustered and org.quartz.scheduler.instanceId. For a single-node instance, org.quartz.jobStore.isClustered is set to false; in a cluster setup, it is changed to true. The second property that needs to change is the instanceId, a name/ID used to uniquely identify the scheduler instance in the cluster. This property can be set to AUTO, in which case each scheduler instance is automatically assigned a unique value, or you can choose to provide a value of your own (which I find useful since it helps me identify where the job is running). Either way, the uniqueness must be maintained.
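For example, if a second node were added to this cluster, its quartz.properties would be identical except for the instanceId (the instanceName must be the same on every node of the cluster, while the instanceId must be unique per node) – a hypothetical second-node entry –

org.quartz.scheduler.instanceName = PRINT_SCHEDULER1
#must be unique per node; alternatively set this to AUTO
org.quartz.scheduler.instanceId = PRINT_SCHEDULER2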

One of the requirements for this to work is that the clocks on the nodes running the scheduler instances are kept in sync, or there may be issues with the schedule. Also, there is no guarantee of equal load distribution amongst the nodes with clustering. As per the documentation, Quartz prefers to run a job on the same node it ran on previously, as long as that node is not under load.

Code @ https://github.com/vageeshhoskere/blog/tree/master/quartz

Multinode cluster setup in Hadoop 2.x

It’s been quite some time since I wanted to join the distributed processing bandwagon, and I finally got my lazy self to actually do something about it and started investing some time to learn and experiment with a few technologies – some old, some not so old and some new – the first of which was Hadoop.

So naturally, the next step was to set Hadoop up in a cluster. The setup process, contrary to any misgivings I may have had, was quite simple and straightforward, and all that needed to be done was to follow a series of steps –

  • First off, choosing the cluster configuration – I decided to use a cluster with one name-node/resource manager and 3 other data-nodes/node managers. For simplicity, let’s call them hadoopmasternode, hadoopdatanode1, hadoopdatanode2 and hadoopdatanode3
  • Once I had all 4 RHEL systems in place, the second step was to download the latest stable release of Hadoop 2.x – which happened to be 2.6 at the time of writing this post. The downloaded tar.gz archive was extracted to the /opt/hadoop folder
  • Hadoop also needs a JDK to be present, which can easily be downloaded from the Oracle Java download site – in my case, JDK 8
  • Next update /etc/hosts file on all the systems to include all the cluster nodes
  • It is better to have a separate user for using hadoop – So create a new user using the commands “useradd -U -m hadoopuser” and “usermod -g root hadoopuser”
  • Now that the hadoop user is created, it is time to make this user the owner of hadoop files – “chown -R hadoopuser:hadoopuser /opt/hadoop”
  • Login as hadoopuser (“su - hadoopuser”) and edit/update the hadoop environment variables for hadoopuser
    • The bash shell needs to be updated with the hadoop variables for which we would need to edit ~/.bashrc (“vi ~/.bashrc”) and append the file with below updates

      export JAVA_HOME=<JAVA_HOME>   # e.g. /usr/java/jdk1.8.0_40/
      export HADOOP_HOME=/opt/hadoop
      export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
      export PATH=$PATH:$HADOOP_HOME/bin
      export PATH=$PATH:$HADOOP_HOME/sbin
      export YARN_HOME=$HADOOP_HOME
      export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
      export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
      
    • Next, update the JAVA_HOME variable with the path to the Java install location in the hadoop settings file hadoop-env.sh under the /opt/hadoop/etc/hadoop folder
  • Once the settings are updated, the same needs to be sourced by running the command “source ~/.bashrc”
  • Now that the hadoop environment settings are updated, the next step is to update the hadoop and yarn settings/configuration for the cluster which are basically a bunch of XML files present in $HADOOP_HOME/etc/hadoop folder
    • First is to edit the core-site.xml and provide the namenode details –

      <property>
         <name>fs.defaultFS</name>
         <value>hdfs://hadoopmasternode:9000</value>
      </property>
      
    • Next, update yarn-site.xml file with Yarn specific configurations –

      <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
      </property>
      <property>
         <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
         <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
      <property>
         <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>hadoopmasternode:9010</value>
      </property>
      <property>
         <name>yarn.resourcemanager.scheduler.address</name>
         <value>hadoopmasternode:9020</value>
      </property>
      <property>
         <name>yarn.resourcemanager.address</name>
         <value>hadoopmasternode:9030</value>
      </property>
      
    • Copy the mapred-site.xml.template file as mapred-site.xml and then mark Yarn as the mapreduce framework by adding the following properties to the mapred-site.xml file

      <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
      </property>
      <property>
         <name>mapred.job.tracker</name>
         <value>hadoopmasternode:9040</value>
      </property>
      
  • Please note, these steps need to be replicated on all the nodes of the cluster
  • Once all the nodes are ready, create the namenode folder on hadoopmasternode –
    • Run command “mkdir -p /opt/hadoop/hdfs_data/namenode” to create the namenode directory
    • Update the hadoop configuration to point to the namenode folder and set the replication factor by editing the $HADOOP_HOME/etc/hadoop/hdfs-site.xml file and including the below properties –

      <property>
         <name>dfs.replication</name>
         <value>3</value>
      </property>
      <property>
         <name>dfs.namenode.name.dir</name>
         <value>file:/opt/hadoop/hdfs_data/namenode</value>
      </property>
      
  • Next, on hadoopmasternode, list the master and slave node details one per line in the $HADOOP_HOME/etc/hadoop/masters and $HADOOP_HOME/etc/hadoop/slaves files respectively (make sure you create the files if they do not exist), as shown below –
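    Assuming the hostnames used above, the two files would contain –

      #$HADOOP_HOME/etc/hadoop/masters
      hadoopmasternode

      #$HADOOP_HOME/etc/hadoop/slaves
      hadoopdatanode1
      hadoopdatanode2
      hadoopdatanode3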
  • Hadoop needs to be able to communicate from the master node to the data nodes via SSH without being prompted for a password. To achieve this, the data nodes need to have the namenode’s key added to their authorized keys. This can be done with the following steps –
    • Use the command “ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa” to generate a key
    • Add this key to the list of authorized keys by running “cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys”
    • Run the command “ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@hadoopdatanode1” , “ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@hadoopdatanode2” and “ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@hadoopdatanode3” to ensure that communication between hadoopmasternode and all three data nodes is authorized
  • Next, on each of the data nodes, create the datanode folder – “mkdir -p $HADOOP_HOME/hdfs_data/datanode” – and update the hadoop configuration to point to the created folder by editing $HADOOP_HOME/etc/hadoop/hdfs-site.xml and adding the following properties –
    • <property>
                <name>dfs.replication</name>
                <value>3</value>
      </property>
      <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/opt/hadoop/hdfs_data/datanode</value>
      </property>
      
  • Once all the datanodes are ready, switch back to hadoopmasternode and format the hdfs filesystem by running the command “$HADOOP_HOME/bin/hdfs namenode -format -clusterId HADOOP_CLUSTER”, which will create a cluster called HADOOP_CLUSTER
  • Once the cluster is formatted, start the hdfs filesystem and the yarn resource manager by running the commands “$HADOOP_HOME/sbin/start-dfs.sh” and “$HADOOP_HOME/sbin/start-yarn.sh” respectively
  • After the services are started, cluster health can be checked @ http://hadoopmasternode:50070/dfshealth.html#tab-overview
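    The datanode status can also be verified from the command line using the dfsadmin report –

      $HADOOP_HOME/bin/hdfs dfsadmin -report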

At any point, in case there is a need to shut down the resource manager and the filesystem, run the scripts $HADOOP_HOME/sbin/stop-yarn.sh and $HADOOP_HOME/sbin/stop-dfs.sh respectively.

Now the hadoop cluster is ready for use…

Commenting XML content using Java

A SAX parser (JDOM’s SAXBuilder here) can be used to read the XML whose content is to be commented out, and JDOM provides a Comment class that can be used to create comments and write them to a file. The sample program below shows how to comment out content in an XML file.

Sample XML:

<?xml version="1.0" encoding="UTF-8"?>
<bookbank>
	<book type="fiction" available="yes">
		<name>Book1</name>
		<author>Author1</author>
		<price>Rs.100</price>
	</book>
	<book type="novel" available="no">
		<name>Book2</name>
		<author>Author2</author>
		<price>Rs.200</price>
	</book>
	<book type="biography" available="yes">
		<name>Book3</name>
		<author>Author3</author>
		<price>Rs.300</price>
	</book>
</bookbank>

Sample Program:


package blog.sample.code;

import java.io.FileWriter;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.jdom.Comment;
import org.jdom.Document;
import org.jdom.Element;
import org.jdom.input.SAXBuilder;
import org.jdom.output.XMLOutputter;

public class XMLCommentTest {

   public XMLCommentTest() throws Exception{
      String outputFile = "C:\\blog\\sample.xml";
      SAXBuilder builder = new SAXBuilder();
      Document document = builder.build(outputFile);
      Element root = document.getRootElement();
      List list = root.getChildren("book");
      List<Comment> newList = new ArrayList<Comment>();
      Iterator itr = list.iterator();
      while(itr.hasNext()){
         Element ele = (Element) itr.next();
         if(ele.getAttributeValue("type").equalsIgnoreCase("biography")){
            java.io.StringWriter sw = new StringWriter();
            XMLOutputter xmlOutput = new XMLOutputter();
            xmlOutput.output(ele, sw);
            Comment comment = new Comment(sw.toString());
            itr.remove();
            newList.add(comment);
         }
      }
      for(Comment com : newList){
         root.addContent(com);
      }
      document.setRootElement(root);
      XMLOutputter xmlOutput = new XMLOutputter();
      //xmlOutput.output(document, System.out);
      xmlOutput.output(document, new FileWriter(outputFile));
   }

   public static void main(String[] args) {
      try {
         new XMLCommentTest();
      } catch (Exception e) {
         e.printStackTrace();
      }
   }

}

The output of the above program is the updated sample.xml with the below content:

<?xml version="1.0" encoding="UTF-8"?>
<bookbank>
	<book type="fiction" available="yes">
		<name>Book1</name>
		<author>Author1</author>
		<price>Rs.100</price>
	</book>
	<book type="novel" available="no">
		<name>Book2</name>
		<author>Author2</author>
		<price>Rs.200</price>
	</book>
	<!-- <book type="biography" available="yes">
		<name>Book3</name>
		<author>Author3</author>
		<price>Rs.300</price>
	</book> -->
</bookbank>


Creating update site for Eclipse Plug-in

The Eclipse plug-in developed can be exported so that it is accessible via an update site. This involves creating a feature project for the plug-in and uploading it to a file (FTP)/web server, from where it can be accessed using a URL via the Eclipse plug-in installer.
The first step in the procedure is to create a feature project for the plug-in.

  • Choose File -> New -> Plug-in Development -> Feature Project and click Next
  • Enter the Name for the project and select Next
  • Once done, select the plugin that is to be bundled as a feature from the plug-ins list
  • Click on Finish to create the project
  • Next, a new update site project is to be created which will refer to the feature just created. Navigate to File -> New -> Plug-in development -> Update Site Project to create a new update site and then give the same a meaningful name.
  • In the site.xml file that opens up, click on “New Category” to create a category definition that will be added to the software repository
  • Provide a meaningful ID and name for the category and optionally also add description for the category
  • After creating the category, the feature must be linked to it, to ensure that when installing the plug-in from the update site, the required feature is listed under the selected category
  • In the Archives tab of the site.xml, provide information on the Name, URL and description of the FTP server where the plug-in is to be hosted
  • Now the update site is created. Next select the category and click on the “Build All” button to package the plugin. Once built, the package will be exported to the root folder of the current update site project itself
  • The plug-in is ready to be installed. Go to Help -> Install New Software to add the local site (project location from the previous step) to the repository and install the plugin
  • Restart Eclipse to use the plugin installed

Unit testing with JUnit for Hibernate using HSQLDB (In-Memory)

Performing unit tests with JUnit on a Hibernate code base can be accomplished using the HSQLDB database. HSQLDB provides two ways of doing this –

  • Using the file system as the database
  • Using the HSQLDB in-memory database

Using the in-memory database needs no change to your code other than changes in hibernate.cfg.xml (the Hibernate configuration file). The configuration below uses the HSQLDB in-memory database to test an Employee entity.

hibernate.cfg.xml –


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
                                         "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
 <session-factory name="">
  <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
  <property name="hibernate.connection.url">jdbc:hsqldb:mem:testdb;shutdown=false</property>
  <property name="hibernate.connection.username">sa</property><!-- default username -->
  <property name="hibernate.connection.password"/><!-- default password -->
  <property name="hibernate.connection.pool_size">10</property>
  <property name="hibernate.connection.autocommit">true</property>
  <property name="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</property>
  <property name="hibernate.hbm2ddl.auto">create-drop</property><!-- creates the tables from the entites automatically -->
  <property name="show_sql">true</property>
  <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
  <mapping class="blog.hibernate.employee.Employee"/>
 </session-factory>
</hibernate-configuration>


package blog.hibernate.employeeTest;


import static org.junit.Assert.*;

import java.io.File;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.AnnotationConfiguration;
import org.hibernate.cfg.Configuration;
import org.junit.BeforeClass;
import org.junit.Test;

import blog.hibernate.employee.Employee;
import blog.hibernate.employee.EmployeeManager;

public class EmployeeTest {

	private static Configuration config;
	private static SessionFactory factory;
	private static Session hibernateSession;

	//@BeforeClass methods must be static in JUnit 4
	@BeforeClass
	public static void init() {
		config = new AnnotationConfiguration();
		config.configure(new File("hibernate.cfg.xml"));
		factory = config.buildSessionFactory();
		hibernateSession = factory.openSession();
	}
	
	@Test
	public void insertEmployee(){
		String empName = "Employee1";
		String empLocation = "India";
		//Add new employee using the session created by HSQLDB configuration
		EmployeeManager.addEmployee(new Employee(empName,empLocation),hibernateSession);
		Employee emp = EmployeeManager.getEmployeeByName(empName,hibernateSession);
		assertEquals("India", emp.getLocation());
	}
}
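The Employee entity and the EmployeeManager helper are not shown here; below is a minimal sketch of what the Employee entity could look like for this test (the field names and constructor are assumptions based on the test code) –

package blog.hibernate.employee;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Employee {

	@Id
	@GeneratedValue
	private long id;

	private String name;
	private String location;

	//no-arg constructor required by Hibernate
	public Employee() {
	}

	public Employee(String name, String location) {
		this.name = name;
		this.location = location;
	}

	public String getName() {
		return name;
	}

	public String getLocation() {
		return location;
	}
}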


The hibernate.hbm2ddl.auto option makes sure that there is no need to explicitly create the schema for the entities; this does not hold, however, for any dependent tables that are not part of the Hibernate entities. In such a case, the schema needs to be created explicitly.

It is important to note that the in-memory schema persists only for that particular session. Once the session is closed, the schema is lost. If there is a need for persistence across tests (for different sessions), then the file-system database can be used instead.
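With file-based persistence, only the connection URL in hibernate.cfg.xml needs to change – something along these lines (the file path here is just an example) –

  <property name="hibernate.connection.url">jdbc:hsqldb:file:./data/testdb;shutdown=true</property>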

Cron Trigger, Jobs and Expressions in Quartz Scheduler

Cron triggers in Quartz can be used to schedule a job based on an expression (somewhat like a regex) defining precisely when job execution is allowed. A cron trigger can be used to schedule a job only on specific days of the month, or within a definite time range on specific days, and so on. The cron expression is a string of 7 whitespace-separated fields, of which 6 are mandatory.

The fields and their allowed values (individually or as a range) are –

Seconds -> 0-59
Minutes -> 0-59
Hours -> 0-23
Day of month -> 1-31
Month -> 1-12 or JAN-DEC
Day of week -> 1-7 or SUN-SAT (1 = SUN, 7 = SAT)

And the optional field of
Year -> 1970-2099

Along with literal values for the fields, a cron expression also allows a few wildcard entries, some general and some specific to particular fields. The general wildcard entries include “*”, signifying any value from the allowed range; “,”, signifying a combination of values (for example 1,2,3 for day of month); the range specifier “-”; and the “/” entry, which signifies an increment. For example, using 2/10 in the minutes field means the job is fired every 10th minute starting from the second minute of the hour, i.e. the 2nd, 12th, 22nd, 32nd, 42nd and 52nd minutes. There is also the “?” literal, which signifies no specific value; it is used especially in the day-of-week and day-of-month fields, where one might influence the other. For example, to specify all weekdays of a month, the “?” literal can be used in the day-of-month field and MON-FRI in the day-of-week field, so that the cron fires on all weekdays irrespective of the date.

Some of the special wildcard entries which are specific to the “day of month” field are L and W – where L specifies the last day of the month and W signifies the weekday nearest to the given day. For example, 10W fires on the 10th itself if it is a weekday, otherwise on the closest weekday to it. Please note that the usage of W does not spill over into the next month: if 31W is specified and the 31st falls on a weekend, the job is executed on the last weekday of that same month rather than on the 1st of the next month.

The other wildcard entries are L and #, used in the “day of week” field. The “L” entry has a similar meaning here: on its own it signifies the last day of the week (i.e. 7 or SAT), and when used with a value from 1-7 it means the last such day of the month – for example, 5L specifies the last Thursday of the month. The # wildcard entry, on the other hand, is used to specify the nth occurrence of a day in the month – for example, 3#2 means the second Tuesday of the month (3 -> Tuesday and #2 -> its position in the month).
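To make these entries concrete, here are a few illustrative expressions (my own examples, built from the rules described above) –

0 2/10 * * * ? -> every 10th minute starting from the 2nd minute, every hour of every day
0 0 12 L * ? -> 12:00 noon on the last day of every month
0 15 10 ? * 6L -> 10:15 AM on the last Friday of every month
0 0 9 ? * 2#1 -> 9:00 AM on the first Monday of every month
0 0 18 15W * ? -> 6:00 PM on the weekday nearest to the 15th of every month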

Let us convert our Alarm scheduler application from the previous posts to use a cron trigger, so that the alarm is scheduled only for weekdays, along with a break for Christmas –

Sample program – AlarmSchedule.java


import java.text.ParseException;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.Properties;

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SchedulerFactory;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.calendar.AnnualCalendar;

public class AlarmSchedule {

	public AlarmSchedule(){
		try{
			//Create the annual calendar object - to add Christmas holiday to our Alarm Job fire exception
			AnnualCalendar holidays = new AnnualCalendar();
			Calendar christmas = new GregorianCalendar(2011, 11, 25);
			holidays.setDayExcluded(christmas, true);
			Properties prop = new Properties();
			prop.setProperty("org.quartz.jobStore.class", "org.quartz.simpl.RAMJobStore");
			prop.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
			prop.setProperty("org.quartz.threadPool.threadCount", "4");
			SchedulerFactory schdFact = new StdSchedulerFactory(prop);
			Scheduler schd = schdFact.getScheduler();
			//add the Calendar object created to the scheduler with a string identifier to it
			schd.addCalendar("holidays", holidays, false, true);
			schd.start();
			JobDetail jd = new JobDetail("alarmjob", Scheduler.DEFAULT_GROUP, AlarmJob.class);
			//Define a cron job such that the job executes every weekday at 06:00
			CronTrigger t = new CronTrigger("alarmtrigger", Scheduler.DEFAULT_GROUP, "0 0 6 ? * MON-FRI *");
			//set the calendar associated with the trigger
			t.setCalendarName("holidays");
			t.getJobDataMap().put("auth_name", "Vageesh");
			t.setStartTime(new Date());
			schd.addJobListener(new AlarmJobListener());
			jd.addJobListener("Alarm gone");
			schd.scheduleJob(jd, t);
			System.out.println(schd.getSchedulerName());
		}
		catch(SchedulerException e){
			e.printStackTrace();
		}
		catch(ParseException e){
			e.printStackTrace();
		}
	}
	
	public static void main(String[] args) {
		new AlarmSchedule();
	}
}

In the above program, the line


CronTrigger t = new CronTrigger("alarmtrigger", Scheduler.DEFAULT_GROUP, "0 0 6 ? * MON-FRI *");

creates a new cron-trigger that is scheduled for every weekday at 0600 hrs. The string “0 0 6 ? * MON-FRI *” specifies –

0 – 0th second
0 – 0th minute
6 – 6 AM (hour of day)
? – no specific day of the month
* – all months
MON-FRI – only weekdays
* – any year

Using Calendar in Quartz Scheduler for Job fire skip

Quartz Calendars can be used by the scheduler to block off a list of days, a range of time, or particular days of the year/month/week from the scheduler’s firing times. Attaching a calendar to a trigger ensures that the trigger does not fire on the dates/times defined by the Calendar.

There are different types of Calendar already available, or a new Calendar can be created using the Quartz Calendar interface. A list of the Calendars available in Quartz can be found here

The sample below shows the use of one such Calendar – WeeklyCalendar, which disables job firing on weekends – perfect for our AlarmScheduler application. The way to use it is to first create a WeeklyCalendar object and then add it to the scheduler along with a string name through which it can be referenced later. This string name is then used when setting the calendar for the trigger.

import java.util.Calendar;
import java.util.Date;
import java.util.Properties;

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.impl.calendar.WeeklyCalendar;

import static org.quartz.CronScheduleBuilder.dailyAtHourAndMinute;
import static org.quartz.JobBuilder.newJob;

public class AlarmSchedule {

	public AlarmSchedule(){
		try{
			//Create the weekly calendar object - this by default excludes weekends from the job firing schedule,
			//so there is no need to set it explicitly
			WeeklyCalendar weeklyOff = new WeeklyCalendar();
			//example of adding an excluded day of the week - This excludes fridays from job firing schedule
			//weeklyOff.setDayExcluded(Calendar.FRIDAY, true);
			SchedulerFactory schdFact = new StdSchedulerFactory();
			Scheduler schd = schdFact.getScheduler();
			//add the Calendar object created to the scheduler with a string identifier to it
			schd.addCalendar("weeklyOff", weeklyOff, false, true);
			schd.start();
			JobDetail job = newJob(AlarmJob.class).withIdentity("alarmjob", "alarmjobgroup").build();
			Trigger trigger = TriggerBuilder.newTrigger()
					.withIdentity("alarmtrigger", "alarmtriggergroup")
					.startNow()
					.withSchedule(dailyAtHourAndMinute(6, 30))
					.modifiedByCalendar("weeklyOff")
					.build();
			schd.scheduleJob(job,trigger);
		}
		catch(SchedulerException e){
			e.printStackTrace();
		}
	}

	public static void main(String[] args) {
		new AlarmSchedule();
	}
}

AlarmJob.java –

import java.util.Date;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class AlarmJob implements Job {

	@Override
	public void execute(JobExecutionContext arg0) throws JobExecutionException {
		System.out.println("WAKE UP CALL "+new Date());
	}

}

Code @ Git – https://github.com/vageeshhoskere/blog

JUnit Test case for Java Classes

JUnit is a framework that can be used to run unit tests on the classes you write. JUnit 4.0 allows test cases to live alongside the class itself by annotating a method with @Test, which tells the framework to run it as a test case.

Tests are run using assertions on whether the actual and expected output match. Below is a sample program (JunitSample.java) which has two methods, add and divide, both taking two integer arguments and returning an integer result. This program also includes a test method addTest which asserts on the add method (a test case used within the class itself). The other class is the JUnit test case class (JunitSampleTest.java), which is the JUnit class for the former.

JunitSample.java –


package sample;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class JunitSample {

	public JunitSample(){
		
	}
	
	public int add(int a , int b){
		return(a+b);
	}
	
	public int divide(int a , int b){
		try{
			/**
			 * if b is zero there is divide by zero exception and the method returns
			 * the value of a itself
			 */
			a = a/b;
		}
		catch(Exception e){
			e.printStackTrace();
		}
		return a;
	}
	/*
	 * JUnit test method which asserts on whether the method add is adding the two numbers
	 * 3 and 2 correctly or not
	 */
	@Test
	public void addTest(){
		assertEquals(5, new JunitSample().add(3, 2));
	}
	
	public static void main(String[] args) {
		new JunitSample();
	}

} 

JunitSampleTest.java –

package sample;

import static org.junit.Assert.*;

import org.junit.Test;

public class JunitSampleTest {
	
	@Test
	public void addTest(){
		assertEquals(5, new JunitSample().add(3, 2));
	}
	
	@Test
	public void divideTest(){
		assertEquals(1, new JunitSample().divide(3, 2));
	}
	
	@Test
	public void divideErrorTest(){
		/**
		 * This methods asserts that the method divide on passing 0 as second argument 
		 * returns the first argument as the method throws division by 0 exception as 
		 * defined by JUnitSample.divide()
		 */
		assertEquals(3,new JunitSample().divide(3, 0));
	}
	
}
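To run these tests outside an IDE, one option is JUnit’s JUnitCore runner – a minimal sketch –

package sample;

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class TestRunner {

	public static void main(String[] args) {
		//runs all @Test methods of JunitSampleTest and prints any failures
		Result result = JUnitCore.runClasses(JunitSampleTest.class);
		for (Failure failure : result.getFailures()) {
			System.out.println(failure.toString());
		}
		System.out.println("All tests passed: " + result.wasSuccessful());
	}
}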

XML Parsing using JAXB

JAXB can be used to read an XML file and store it as a Java object. It uses binding classes to bind a schema definition file to the Java class objects. The binding classes can be generated by installing JAXB and then running the xjc.bat command on Windows or xjc.sh on Linux. The sample program below demonstrates reading an XML file.

The command to generate binding classes using xjc would be –

<JAXB_INSTALL_LOCATION>/bin/xjc.bat <Schema file>
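In this example the binding classes are generated into a package named jaxbBind (matching the imports in the program below); assuming a standard JAXB installation, the target package can be specified with the -p option –

<JAXB_INSTALL_LOCATION>/bin/xjc.bat -p jaxbBind sample.xsd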

Sample.xml

<?xml version="1.0" encoding="UTF-8"?>
<JavaCollectionUtils>
	<Type name="List" >
		<impl name="arraylist"/>
		<impl name="linkedlist"/>
	</Type>
	<Type name="map">
		<impl name="HashMap"/>
	</Type>
	<Type name="Table">
		<impl name="HashTable"/>
	</Type>
</JavaCollectionUtils>

Sample.xsd

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="JavaCollectionUtils" type="JavaCollectionUtilsType"/>
  <xs:complexType name="implType">
    <xs:simpleContent>
      <xs:extension base="xs:string">
        <xs:attribute type="xs:string" name="name" use="optional"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
  <xs:complexType name="TypeType">
    <xs:sequence>
      <xs:element type="implType" name="impl" maxOccurs="unbounded" minOccurs="0"/>
    </xs:sequence>
    <xs:attribute type="xs:string" name="name" use="optional"/>
  </xs:complexType>
  <xs:complexType name="JavaCollectionUtilsType">
    <xs:sequence>
      <xs:element type="TypeType" name="Type" maxOccurs="unbounded" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
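For reference, the classes xjc generates from this schema are roughly of the following shape; this is an abbreviated, from-memory sketch (annotations, the ObjectFactory and ImplType are omitted), not the exact generated source –

package jaxbBind;

import java.util.ArrayList;
import java.util.List;

//rough sketch of the class generated for the TypeType complex type
public class TypeType {

	protected List<ImplType> impl;
	protected String name;

	//JAXB exposes repeated child elements as a live list
	public List<ImplType> getImpl() {
		if (impl == null) {
			impl = new ArrayList<ImplType>();
		}
		return impl;
	}

	public String getName() {
		return name;
	}

	public void setName(String value) {
		this.name = value;
	}
}

The program below unmarshals the XML against this schema and walks the generated objects –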

import java.io.File;
import java.util.List;

import javax.xml.XMLConstants;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.Unmarshaller;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;

import jaxbBind.ImplType;
import jaxbBind.JavaCollectionUtilsType;
import jaxbBind.TypeType;


public class JaxbTest {

	/**
	 * @param args
	 */
	public static void main(String[] args) {
		try{
			//jaxbBind is the package that contains all the jaxb bind classes
			JAXBContext context = JAXBContext.newInstance("jaxbBind");
			//Create an unmarshaller instance to convert xml to java object
			Unmarshaller unmarsh = context.createUnmarshaller();
			SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
			//specify the schema definition for parsing
			Schema schema = sf.newSchema(new File("sample.xsd"));
			unmarsh.setSchema(schema);
			//unmarshall the xml file
			JAXBElement<JavaCollectionUtilsType> obj = (JAXBElement<JavaCollectionUtilsType>) unmarsh.unmarshal(new File("sample.xml"));
			JavaCollectionUtilsType collectionUtils = (JavaCollectionUtilsType) obj.getValue();
			//get a list of all tags of type `Type`
			List<TypeType> collectionTypes = collectionUtils.getType();
			for(TypeType collectionType: collectionTypes){
				System.out.println(collectionType.getName());
				//get a list of all tags of type `impl` for a particular Type
				List<ImplType> implTypes = collectionType.getImpl();
				for(ImplType implType : implTypes){
					System.out.println(implType.getName());
				}
			}
		}
		catch(Exception e){
			//The program throws an exception if the xml does not conform to the schema defined
			e.printStackTrace();
		}
		
	}

}

Rescheduling Job from JDBCJobStore in Quartz Scheduler

The last post mentioned using JDBCJobStore to store Quartz-related information so that job details are available permanently and the job can be rescheduled in case the system experiences an outage or downtime.

The sample program below uses the job stored in the database by the previous post and reschedules the PrintStatefulJob.

Please note that this program too uses the properties file which has the same set of properties as the previous program.

Sample configuration – quartz.properties

org.quartz.scheduler.instanceName = PRINT_SCHEDULER1
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 4
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true

#specify the jobstore used
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false

#The datasource for the jobstore that is to be used
org.quartz.jobStore.dataSource = myDS

#quartz table prefixes in the database
org.quartz.jobStore.tablePrefix = qrtz_
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.isClustered = true
org.quartz.scheduler.instanceId = PRINT_SCHEDULER1

#The details of the datasource specified previously
org.quartz.dataSource.myDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL = jdbc:mysql://localhost:3307/blog_test
org.quartz.dataSource.myDS.user = root
org.quartz.dataSource.myDS.password = root
org.quartz.dataSource.myDS.maxConnections = 20
 

Sample program PrintRescheduler.java

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

import static org.quartz.SimpleScheduleBuilder.simpleSchedule;

public class PrintRescheduler {

	private Scheduler scheduler;
	public PrintRescheduler() {
		try {
			scheduler = new StdSchedulerFactory().getScheduler();
			scheduler.start();
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

	public void reSchedule() throws SchedulerException {
		String triggerName = "printTrigger";
		String triggerGroup = "printtriggergroup";
		Trigger oldTrigger = scheduler.getTrigger(TriggerKey.triggerKey(triggerName, triggerGroup));
		//use the same trigger builder so that we do not have to worry about change in name/group
		TriggerBuilder triggerBuilder = oldTrigger.getTriggerBuilder();
		Trigger newTrigger = triggerBuilder
				.withSchedule(simpleSchedule()
						.withIntervalInMilliseconds(200l)
						.repeatForever())
				.build();
		scheduler.rescheduleJob(TriggerKey.triggerKey(triggerName, triggerGroup),newTrigger);
	}

	public void stopScheduler() throws SchedulerException {
		scheduler.shutdown();
	}

	public static void main(String[] args) {
		PrintRescheduler printRescheduler = new PrintRescheduler();
		try {
			Thread.sleep(10000l);
			printRescheduler.reSchedule();
			Thread.sleep(10000l);
			printRescheduler.stopScheduler();
		} catch (Exception e) {
			e.printStackTrace();
		}
	}

}

 

Code @ Github – https://github.com/vageeshhoskere/blog/tree/master/quartz