We needed a Java client for OpenTSDB to store and retrieve our time series data, so I'll share the result here.
Below, I'll walk through two ways of querying data from OpenTSDB using the HTTP API with Java. The first uses POST, where the payload is a JSON object containing all of our query parameters. The second uses GET, with all of the query parameters in the request string.
Let's jump in.
We'll be using OpenTSDB version 2.0.1 today.
So for starters, let's do the POST approach.
First, open your HTTP connection in the usual way. Make sure it's set to POST and you're setting DoOutput and DoInput.
URL url = new URL(urlString + "/api/query/");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestProperty("Accept", "application/json");
conn.setRequestProperty("Content-type", "application/json");
conn.setRequestMethod("POST");
conn.setDoOutput(true);
conn.setDoInput(true);
Now get yourself an OutputStreamWriter.
OutputStreamWriter writer = new OutputStreamWriter(conn.getOutputStream());
At a minimum, OpenTSDB requires a start time and a metric for querying data; the end time is optional and defaults to the current time, though we'll set it explicitly here. The start and end times should be expressed in epoch time using a long. It's also a good idea to specify the aggregator you want to use.
long startEpoch = 1420088400L;
long endEpoch = 1425501345L;
String metric = "warp.speed";
String aggregator = "sum";
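If you need to derive those epoch values from a calendar date, java.time makes it straightforward. A sketch (this assumes Java 8+ and that the post's start value corresponds to midnight, 1 January 2015, US Eastern time, which is what the number works out to):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class EpochStart {
    // Epoch seconds for midnight, 1 January 2015, US Eastern time --
    // the zone the post's start value appears to assume.
    static long startEpoch() {
        return ZonedDateTime
                .of(2015, 1, 1, 0, 0, 0, 0, ZoneId.of("America/New_York"))
                .toEpochSecond();
    }

    public static void main(String[] args) {
        System.out.println(startEpoch()); // prints 1420088400
    }
}
```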
I'll also add a couple of tags in order to show how to do it below.
So now, we create our main JSONObject.
JSONObject mainObject = new JSONObject();
mainObject.put("start", startEpoch);
mainObject.put("end", endEpoch);
So far so good. Next, we need our query parameters. This includes the aggregator and metric. We will put these in a JSONObject which, in turn, will live inside a JSONArray.
JSONArray queryArray = new JSONArray();
JSONObject queryParams = new JSONObject();
queryParams.put("aggregator", aggregator);
queryParams.put("metric", metric);
And we put this into the array.
queryArray.put(queryParams);
Now, if you're using tags, add them next.
JSONObject queryTags = new JSONObject();
queryTags.put("starshipname", "Enterprise");
queryTags.put("captain", "JamesKirk");
queryTags.put("hullregistry", "NCC-1701");
And add that to the queryParams object.
queryParams.put("tags", queryTags);
Now add the queryArray to the main object.
mainObject.put("queries", queryArray);
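At this point the assembled payload, once serialized, should look something like this (following OpenTSDB's /api/query request format, with the values defined above):

```json
{
  "start": 1420088400,
  "end": 1425501345,
  "queries": [
    {
      "aggregator": "sum",
      "metric": "warp.speed",
      "tags": {
        "starshipname": "Enterprise",
        "captain": "JamesKirk",
        "hullregistry": "NCC-1701"
      }
    }
  ]
}
```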
You can now write this JSONObject as a String to your connection.
String queryString = mainObject.toString();
writer.write(queryString);
writer.flush();
writer.close();
Now the server will respond with a status code. If it's good (HTTP_OK), you should be able to get the response data.
String result = "";
int httpResult = conn.getResponseCode();
if (httpResult == HttpURLConnection.HTTP_OK) {
    result = readHTTPConnection(conn);
}
And here's the readHTTPConnection method:
public static String readHTTPConnection(HttpURLConnection conn) {
    StringBuilder sb = new StringBuilder();
    try {
        BufferedReader br = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "utf-8"));
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line).append('\n');
        }
        br.close();
    } catch (IOException e) {
        // UnsupportedEncodingException is a subclass of IOException,
        // so one catch block covers both.
        e.printStackTrace();
    }
    return sb.toString();
}
What you get back is a string that can be turned into a JSONArray.
return new JSONArray(result);
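For reference, what comes back from /api/query is a JSON array of result objects, each carrying the metric, its tags, any aggregated tags, and a dps map of timestamp-to-value pairs. A made-up example for this query (the data point values are invented):

```json
[
  {
    "metric": "warp.speed",
    "tags": {
      "starshipname": "Enterprise",
      "captain": "JamesKirk",
      "hullregistry": "NCC-1701"
    },
    "aggregateTags": [],
    "dps": {
      "1420088400": 9.2,
      "1420088460": 9.4
    }
  }
]
```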
Now, that's the way I like to do it because working with a bunch of String appenders feels clunky to me, but if you're a fan of that approach, here's how you can run that same query using an HTTP GET request.
long startEpoch = 1420088400L;
long endEpoch = 1425501345L;
String metric = "warp.speed";
String aggregator = "sum";
String result = "";
So now we manually build our request string:
StringBuilder builder = new StringBuilder();
builder.append("?start=");
builder.append(startEpoch);
builder.append("&end=");
builder.append(endEpoch);
builder.append("&m=");
builder.append(aggregator);
builder.append(":");
builder.append(metric);
Now, if we have tags, they should be in pairs and the entire set is enclosed in curly braces and separated by commas.
builder.append("{starshipname=Enterprise,captain=JamesKirk,hullregistry=NCC-1701}");
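Rather than hardcoding the tag clause, you can build it from a map. A small sketch (LinkedHashMap keeps insertion order so the output is predictable):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TagClause {
    // Joins tag pairs as {k1=v1,k2=v2,...} for the m= query parameter.
    static String build(Map<String, String> tags) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : tags.entrySet()) {
            if (!first) sb.append(',');
            sb.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("starshipname", "Enterprise");
        tags.put("captain", "JamesKirk");
        tags.put("hullregistry", "NCC-1701");
        System.out.println(build(tags));
        // prints {starshipname=Enterprise,captain=JamesKirk,hullregistry=NCC-1701}
    }
}
```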
URL url = new URL(urlString + "/api/query/" + builder.toString());
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestProperty("Accept", "application/json");
conn.setRequestProperty("Content-type", "application/json");
conn.setRequestMethod("GET");
And we get the result data back exactly the same way we did above.
int httpResult = conn.getResponseCode();
if (httpResult == HttpURLConnection.HTTP_OK) {
    result = readHTTPConnection(conn);
}
return new JSONArray(result);
So the GET approach is shorter but I personally prefer the POST method. Either way will get you the same results.
Wednesday, March 11, 2015
Monday, February 16, 2015
Storing Data in OpenTSDB Using Java
Not a lot of examples exist for showing how to use Java to store a time series in OpenTSDB. It isn't complicated, but there are a couple of noteworthy items to keep in mind.
The OpenTSDB I'm using is running on a 3 node distributed cluster built on the following:
CentOS 6
Apache Zookeeper 3.4.6
Hadoop 2.4.0
HBase 0.98.8
OpenTSDB 2.0.1
Java 7
This post assumes you have a working OpenTSDB instance.
The first thing to understand is that the HTTP API is what makes OpenTSDB language-agnostic: you can build an interface to the database in any language that can speak HTTP. Ultimately, what gets posted to the OpenTSDB URL is a JSON array containing all of the time points to be stored.
In this example, we're storing ECG time series data for an anonymous patient. Due to HIPAA regulations, even the specific date the data was gathered is off limits, so we will arbitrarily choose 12:00 AM on 1 January 2015 as the start time for this time series.
Step 1: Create an ArrayList of dumb data objects (DDOs) to represent the time series. Each instance of the DDO represents a single time point. There is an example of a suitable class in OpenTSDB's GitHub repository. The class should have, at a minimum, a field for the timestamp, a field for the data value, the metric to be stored, and a set of tags.
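OpenTSDB's source includes an IncomingDataPoint class you can model yours on. As a sketch, a minimal stand-in with the fields just described (field names assumed) might look like this:

```java
import java.util.HashMap;

// Minimal stand-in for OpenTSDB's IncomingDataPoint: one time point.
public class IncomingDataPoint {
    private String metric;               // e.g. "ecg.V6.uv"
    private long timestamp;              // epoch time, here in milliseconds
    private int value;                   // the measured value (an int, per this post)
    private HashMap<String, String> tags;

    public IncomingDataPoint(String metric, long timestamp, int value,
                             HashMap<String, String> tags) {
        this.metric = metric;
        this.timestamp = timestamp;
        this.value = value;
        this.tags = tags;
    }

    public String getMetric() { return metric; }
    public long getTimestamp() { return timestamp; }
    public int getValue() { return value; }
    public HashMap<String, String> getTags() { return tags; }
}
```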
The metric names what is being stored; in an ECG time series, for example, the values are in microvolts. Since ECGs record multiple simultaneous channels, I am creating a separate metric for each channel.
NOTE: Before you can store values using a particular metric, you must register that metric in the database!
Use the command mkmetric to register your new metric. For example:
./tsdb mkmetric ecg.V6.uv
where ecg.V6.uv is the new metric being created.
In my own version of the IncomingDataPoint, I changed the type of the "value" field to an int, which makes the serialized JSON array look the same as the examples in OpenTSDB's documentation.
Timestamps should be expressed in epoch format when stored in the IncomingDataPoint. In the case of my time series, the data comes in at variable sampling rates, usually around 500 Hz. OpenTSDB can store time series in intervals as small as one millisecond, which is sufficient for this rate. The epoch value for 12:00 AM 1 January 2015 is 1420088400000, with resolution to the millisecond.
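That millisecond value can be checked with java.time (Java 8+, so not the Java 7 listed in the stack above; the US Eastern zone is an assumption based on what the number works out to):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class EpochMillis {
    // Epoch milliseconds for midnight, 1 January 2015, US Eastern time.
    static long millis() {
        return ZonedDateTime
                .of(2015, 1, 1, 0, 0, 0, 0, ZoneId.of("America/New_York"))
                .toInstant().toEpochMilli();
    }

    public static void main(String[] args) {
        System.out.println(millis()); // prints 1420088400000
    }
}
```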
Tags are an optional way to add details to individual time points; they can be used when searching for data and also provide metadata. They are stored in the dumb data object as a HashMap.
Example:
ArrayList<IncomingDataPoint> dataPoints = new ArrayList<IncomingDataPoint>();
HashMap<String, String> tags = new HashMap<String, String>();
tags.put("format", "phillips103");
tags.put("subjectId", "NCC1701");
dataPoints.add(new IncomingDataPoint("ecg.V6.uv", 1420088400000L, 5, tags));
Once your ArrayList contains all of the data points, it needs to be converted into a JSON array using GSON.
Gson gson = new Gson();
String json = gson.toJson(dataPoints);
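For the single data point above, the serialized array should come out looking something like this (assuming the DDO's field names are metric, timestamp, value, and tags, which is what OpenTSDB's /api/put endpoint expects):

```json
[
  {
    "metric": "ecg.V6.uv",
    "timestamp": 1420088400000,
    "value": 5,
    "tags": {
      "format": "phillips103",
      "subjectId": "NCC1701"
    }
  }
]
```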
Now all that remains is to open the URL connection and send the data. Since we're using the OpenTSDB HTTP API, we'll be using port 4242 and the put endpoint. An example URL string looks like this:
String urlString = "http://myexampleopentsdb.com:4242/api/put";
Next we open the URL:
HttpURLConnection httpConnection = TimeSeriesUtility.openHTTPConnection(urlString);
OutputStreamWriter wr = new OutputStreamWriter(httpConnection.getOutputStream());
And we write our JSON array to it:
wr.write(json);
wr.flush();
wr.close();
We should also check the response code, since OpenTSDB will provide feedback that may be useful.
int httpResult = httpConnection.getResponseCode();
Next, we talk about how to query the data back...
Thursday, April 3, 2014
My Liferay Startup Action Killed My Hot Deploys
or... where have all my custom portlets gone?
Liferay 6.1.1
CentOS
So I have an ever-evolving Extension plugin I use for the portals I'm responsible for. Among other things, it provides custom authentication and applies a custom new user form for users accessing the site for the first time. Since this form requires custom fields I implemented a startup action class to create the new fields for the user in Liferay's expando table the first time the portal starts up after having the extension applied.
It installed fine, I restarted the portal with no problems, the new authenticator worked fine, the new user form worked fine. All was well.
Except that no custom portlets appeared on my pages anymore.
Huh?
The portlet app folders were still right there under webapps, and on startup Liferay had no errors of any kind. I tried re-installing a portlet just to see what would happen and...
...nothing. The hot deploy listener wasn't responding to the .war file in the deploy folder.
Eventually I narrowed it down to my custom startup action. My portal-ext.properties file had the following line:
global.startup.events=edu.jhu.cvrg.utilities.setup.CVRGStartupAction
When I commented out this line and deployed my extension again (in a fresh copy of the portal) the hot deploy listener worked fine.
Interesting...
So I commented out all the code in the startup action class and re-enabled it. Once again, deployments ceased to work.
Now, Liferay tells us that this global startup event runs once during the portal startup. Well, that's exactly what I wanted. There is however, another property...
application.startup.events
Which runs for each site on the portal. In the case of a portal running only one site, this is functionally equivalent. At least, it's supposed to be...
But when I changed the line in portal-ext.properties so it read
application.startup.events=edu.jhu.cvrg.utilities.setup.CVRGStartupAction
The extension worked perfectly, and so did hot deploys.
I'd really like to hear from a Liferay dev on this. Is this a bug, or was I applying something incorrectly?
Tuesday, January 28, 2014
Resolving java.lang.ArrayIndexOutOfBoundsException: 0 on startup action in Liferay
So Liferay has a great system for creating global startup actions. You configure the portal-ext.properties to run a class derived from AppStartupAction and Liferay runs it during startup.
I found a problem however...
20:27:35,500 ERROR [pool-2-thread-1][MainServlet:325] java.lang.ArrayIndexOutOfBoundsException: 0
java.lang.ArrayIndexOutOfBoundsException: 0
at com.liferay.portal.util.PortalInstances._getDefaultCompanyId(PortalInstances.java:318)
at com.liferay.portal.util.PortalInstances.getDefaultCompanyId(PortalInstances.java:82)
...
Strange, isn't it? The companyId is provided by the PortalInstances class which should have been initialized, right?
The problem is that Liferay's MainServlet runs the global startup events BEFORE it initializes the PortalInstances object. Since PortalInstances supplies the default companyId, it isn't available yet if you need it in your custom startup action.
The solution I used was to modify the MainServlet.java class to place the global startup actions right after company initialization.
Thursday, September 19, 2013
Using Hudson and Liferay
This is one of those things that seems really intimidating when you first start, and even more so when you start Googling it and realize there isn't much out there to help you; what there is isn't very helpful. Once again, those resources all take for granted that the reader already knows everything there is to know (which leaves me wondering what the purpose of the resource is).
Novices need love too.
So here's the scenario: You need to stand up a Hudson system to handle your portlets (or whatever Liferay plugins) and have never used Hudson before.
Fear not.
It's actually not difficult at all.
Our setup here:
Server: CentOS
Portal: Liferay 6.1
Hudson: Eclipse Edition
Code Repo: GitHub
The first thing you must do is to get the latest installation guide from Hudson. I found The Hudson Book to be pretty good and easy to read.
FOLLOW THOSE INSTRUCTIONS TO THE LETTER. I'm not kidding about that. Do it. It's not complex. In the case of my team, I installed it on a CentOS box as a service. That gets you your basic Hudson installed.
Now, if this thing is living on the same box as your Liferay server, you'll need to tell Hudson to use a port other than the one your Liferay will be using. In my case, I set it to port 8585 by doing the following:
Go to /etc/sysconfig and open the "hudson" file in your text editor.
Change HUDSON_PORT from 8080 to 8585
Save it.
Now, in my case, I had a problem where Hudson wouldn't start because it was pointing to a Java 1.5 JRE on my system and it required at least 1.6. My JAVA_HOME variable was set to a Java 1.7 JDK but Hudson didn't use that. Beware. It looks in /usr/bin for your Java. Our quick fix was to put a symbolic link in there to make it point to the correct Java version.
Now, I'm going to assume here that you either already have Liferay installed on your server, or that you know how. There's plenty of info on that so go ahead and get that done then come back here.
All done? Ok. Next, you'll need the Liferay SDK on that machine. Install it and configure it, using Liferay's documentation, so that the SDK is pointing to the instance of Liferay on your server.
Here's what's going to happen. When Hudson pulls code from the repo, it's going to put it in the SDK and run the Ant command to use the build script that comes with Liferay to build and deploy the plugin. That means that the hudson user needs to have the correct permissions on the SDK. In our case, we simply changed the ownership of the SDK to a user group that the hudson user was a part of. Keep in mind this also applied to the deploy folder in Liferay, as the Hudson user needs to be able to copy the .war file into it, and whatever user you're running Liferay under needs to be able to pull the file from that folder and delete it.
Now you'll need to go into Hudson and start setting up jobs.
In the Hudson interface, click "New Job"
Give the job a name and make sure "Build a free-style software job" is selected, then hit Ok
Most of these settings are a matter of preference and tuning, so you won't really know how they should be set until you've run the system for a while. I'll mention only the ones you need in order to get Hudson working.
Under Advanced Job Options (You may have to click the "Advanced" button to see this) check "Use custom workspace" and fill in the directory path where you want the code to go. This should be in your Liferay SDK. For example: "/opt/liferay/liferay-plugins-sdk-6.1.1/portlets/welcome-portlet" in the case of a portlet named "welcome-portlet."
Under Source code management, configure it to point to the project in your repo. In our case, that's Github, so I selected Git and gave it the URL to the project repo (remember to include the .git extension at the end).
If you want Hudson to automatically perform a build when the code repo is updated, you'll want to choose the Poll SCM trigger under "Build Triggers." The schedule field takes a basic cron expression. (For example, "0 * * * *" tells it to run every hour on the hour.)
Under the "Build" section, you'll need to use Ant. This is the default for how Liferay projects are built. (You'll need Ant installed on your server.) Select the Ant version you're using and in the "Targets" field simply enter "deploy." This tells Hudson to run the deploy target on Ant just as you would in the command line if you were building the portlets manually. When that happens, the plugin will be built and copied by Ant into the deploy folder and the process from Hudson's perspective is done.
Friday, April 5, 2013
My ManagedProperty is null!
I've actually been told by a developer or two that I shouldn't use Managed Properties because "they don't work right."
The scenario:
JSF 2.0
Tomcat 6.0.32
Java 1.6
Imagine the following Managed Bean, which requires a value from another Managed Bean:
@ManagedBean(name = "downloadBacking")
@ViewScoped
public class DownloadBacking implements Serializable {

    private static final long serialVersionUID = 4778576272893200307L;

    @ManagedProperty("#{userModel.username}")
    private String userID;

    public DownloadBacking() {
        System.out.println("UserID is: " + userID);
    }

    public String getUserID() {
        return userID;
    }

    public void setUserID(String userID) {
        this.userID = userID;
    }
}
What would the output be from the above code when the object is instantiated, if the value of userModel.username is "Magnus the Red"?
A) "UserID is: Magnus the Red"
B) Casting error
C) "UserID is: null"
D) "Ahriman had better watch out..."
The correct answer is C: the constructor runs before JSF injects the property, so userID is still null. This is why I'm told that ManagedProperty doesn't work right.
The truth is that ManagedProperty works just fine if you understand the order in which things happen. JSF injects the value of userModel.username by calling setUserID(), and it can't do that until the object has been instantiated. So if you reference the ManagedProperty in the constructor, the value simply hasn't been injected yet.
The solution here is to NOT put references to a ManagedProperty in your constructor. Instead, use the @PostConstruct annotation with an initialize() method to run your code AFTER the value has been injected into the newly created bean.
@ManagedBean(name = "downloadBacking")
@ViewScoped
public class DownloadBacking implements Serializable {

    private static final long serialVersionUID = 4778576272893200307L;

    @ManagedProperty("#{userModel.username}")
    private String userID;

    @PostConstruct
    public void initialize() {
        System.out.println("UserID is: " + userID);
    }

    public String getUserID() {
        return userID;
    }

    public void setUserID(String userID) {
        this.userID = userID;
    }
}
Now the output will be A).
Tuesday, August 7, 2012
Including External jars in your Liferay Extension
This is one that I've seen asked about a lot online, and the solution is so painfully simple that I can only conclude that the people who know the answer aren't bothering to help out those who are asking.
That's what this blog is for.
The environment:
Liferay 6.0
Apache Ant
You're trying to use the ant deploy command, either from within Eclipse or on the command line, but you have a dependency on another .jar file that you need to include in the compilation. If you add the .jar to your project build path in Eclipse, the IDE will add a line to your .classpath file pointing to the dependency.
The problem is that this doesn't help Ant find the file.
If you need to include external .jar resources in your Ext, just drop a copy of the .jar you need in your Project/docroot/ext-lib/global folder. Your ant deploy target should now be able to use it fine.