Saturday, February 27, 2010
Oracle is once again making people enter contact information just to receive, via email, a link to download "legacy" versions of the JDK.
Posted by Steve Shabino at 4:41 PM 0 comments
Thursday, February 11, 2010
Dumb Discussions with Dell
Silly me to think that Dell could answer a basic question about its own product via its "Chat with an Agent" website feature. Here's my chat transcript. Thoughts?
Posted by Steve Shabino at 11:21 PM 2 comments
Friday, January 4, 2008
Drools on JRockit
So, I've obviously ignored my blog for way too long. I'm going to catch up on the project entries real-soon-now. But, in the meantime we've been trying out Drools on JRockit, and we have good results to report.
I compared the JRockit implementation of JDK 1.5.0_12 with the Sun version. To start up Drools with a rather large set of derived facts (inserted in the consequences of various rules), JRockit was about a third faster out of the box, with no tuning other than setting a large max heap size. I measured 10 successive start-up runs because JRockit's aggressive precompilation made the first run significantly slower than subsequent ones. These tests were on a 32-bit Windows box.
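The measurement approach above can be sketched with a small harness. This is a hypothetical illustration, not our actual test code: the Drools session setup is stubbed out, and the point is simply to time each run separately so the slow, precompilation-heavy first run can be reported apart from the warmed-up ones.

```java
import java.util.ArrayList;
import java.util.List;

public class StartupTimer {

    // Stand-in for the real work: building the rule base and inserting
    // the large set of derived facts. Hypothetical placeholder workload.
    static void startUpDrools() {
        long sum = 0;
        for (int i = 0; i < 100000; i++) sum += i;
    }

    // Time N successive in-process runs and return each elapsed time in ms.
    static List<Long> timeRuns(int runs) {
        List<Long> millis = new ArrayList<Long>();
        for (int i = 0; i < runs; i++) {
            long t0 = System.nanoTime();
            startUpDrools();
            millis.add((System.nanoTime() - t0) / 1000000L);
        }
        return millis;
    }

    public static void main(String[] args) {
        List<Long> times = timeRuns(10);
        System.out.println("first run: " + times.get(0) + " ms");
        long rest = 0;
        for (int i = 1; i < times.size(); i++) rest += times.get(i);
        System.out.println("avg of rest: " + (rest / (times.size() - 1)) + " ms");
    }
}
```

Reporting the first run separately matters here, because averaging it in would hide exactly the JIT warm-up effect being measured.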
Also, large pages make a difference. JRockit supports large-page memory allocation on OSes that support it (I'm no expert on memory allocation, so be warned). On Linux (RHEL 4) with a 10 gig (yes, 10 gig!) heap, we improved Drools start-up by 20% by turning on -XXlargePages support.
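For reference, a launch line along these lines enables that setup. This is a sketch, not our exact command: the classpath and main class are made up, and the OS side (reserving huge pages) has to be configured separately by a sysadmin.

```shell
# Hypothetical JRockit launch with a fixed 10 GB heap and large pages.
# Large pages only take effect if the OS has huge pages reserved
# (e.g. via the vm.nr_hugepages sysctl on Linux).
java -Xms10g -Xmx10g -XXlargePages -cp drools-app.jar com.example.Main
```

Setting -Xms equal to -Xmx avoids heap resizing during the start-up window we were measuring.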
We're going to continue to tune JRockit for our large heap. GC pauses are still really long (well over a minute), but we really only care about start-up speed. Responsiveness doesn't matter until we're ready to make tons of working memory queries. When we learn more, I'll report back.
We're a Weblogic shop, so there are no licensing issues involved for us to use JRockit, and we expect to embed Drools within a Weblogic instance anyway. As always, your mileage may vary.
Posted by Steve Shabino at 12:23 PM 0 comments
Friday, November 2, 2007
More Requirements
Today, we have a system in place that provides users with a ranked list of worker assignments for a job. It's pretty good at coming up with a useful list, but it's a little slow and "mysterious."
Mysterious?
Users want to know why the system gave out the suggestions it did and why it didn't give out some other seemingly obvious ones. Our support team fields questions like this all the time:
- I know there's a worker right around the corner from this job site. Why didn't the system put him on the list? Possible Answer: This worker doesn't have the proper certification to do a job of type X.
- Why was Worker A ranked lower than Worker B? Worker A hasn't done a job for a bit, but Worker B just completed one. Possible Answer: Worker A was too far away from the job site and would have to start work late.
- What are the differences to the business between selecting Workers A, B, and C? Is one choice more profitable than another one?
So, the new system will be required to explain itself. Why did it select and rank the workers as it did? Why did it not select some seemingly obvious workers? And, what are the key attributes about potential choices that affect the business?
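One way to meet the "explain itself" requirement is to have any rule that excludes or demotes a worker record an explanation fact alongside its normal consequence. The sketch below is illustrative only: the class and reason strings are made up, and in a Drools consequence the equivalent would be inserting such an object into working memory.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical explanation fact: one instance per rule firing that
// excluded or demoted a worker for a particular job.
public class Explanation {
    final String workerId;
    final String jobId;
    final String reason;

    Explanation(String workerId, String jobId, String reason) {
        this.workerId = workerId;
        this.jobId = jobId;
        this.reason = reason;
    }

    public String toString() {
        return workerId + " / " + jobId + ": " + reason;
    }

    // Gather every explanation recorded for one worker, so support
    // staff can answer "why isn't this worker on the list?"
    static List<Explanation> forWorker(List<Explanation> all, String workerId) {
        List<Explanation> result = new ArrayList<Explanation>();
        for (Explanation e : all) {
            if (e.workerId.equals(workerId)) result.add(e);
        }
        return result;
    }
}
```

Because the explanations are ordinary facts, they can also be queried per job to compare why Worker A ranked below Worker B.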
Posted by Steve Shabino at 5:48 PM 1 comments
Labels: drools
Monday, October 29, 2007
The Problem at Hand
So, this isn't really the problem, but it's close enough...
We assign workers to jobs at various client sites around the country. Although we learn about some of these jobs a few days ahead of time, most are rush jobs -- "How soon can you get here?" sorts of things. And, these jobs are specialized. Not all workers can perform all jobs. Special equipment is often required, as are various worker certifications. Fairness matters too -- workers who haven't done a job in a while get first dibs.
Does this sound like a problem for a rules engine yet?
Even though there are a lot of hard-and-fast rules, my client's employees still need to exercise some discretion when making assignment choices. So, our goal is to provide a ranked list of workers appropriate for particular jobs. Speed is important. Users won't wait for us to compute a list of workers each time they want to contemplate a decision. A half second is probably all the compute time we can tolerate.
That brings me to the scale of our problem. We have about 500 jobs starting every day and about 1000 workers. While these numbers don't seem large at first, we're looking at half a million potential assignments at any given time. Some of the rules for assignment are straightforward: if the job requires a particular certification, the worker must have it. However, many of the rules are more computationally intensive: how long will it take the worker to drive to the job site, and will he be able to start on time? A call out to our routing server takes a couple of seconds, and that alone is way beyond our SLA for our users.
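The scale numbers above suggest one obvious tactic: evaluate the cheap rules first so the expensive ones run on as few worker/job pairs as possible. The sketch below is hypothetical (the routing call is a stub, and all names are invented); it just shows the cheap certification filter gating the expensive drive-time check.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of ordering cheap checks before expensive ones. In the real
// system the routing call costs a couple of seconds, so every pair it
// is skipped for is a direct win against the half-second budget.
public class Matcher {
    static int routingCalls = 0;

    // Expensive: stand-in for a call to the routing server.
    static int driveMinutes(String workerId, String jobSite) {
        routingCalls++;
        return 30; // stub value
    }

    static List<String> candidates(Set<String> workersWithCert,
                                   List<String> allWorkers,
                                   String jobSite, int maxMinutes) {
        List<String> ok = new ArrayList<String>();
        for (String w : allWorkers) {
            if (!workersWithCert.contains(w)) continue; // cheap filter first
            if (driveMinutes(w, jobSite) <= maxMinutes) ok.add(w);
        }
        return ok;
    }
}
```

With 500 jobs and 1000 workers, even a filter that eliminates half the pairs up front cuts a quarter million expensive evaluations.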
Posted by Steve Shabino at 9:39 PM 3 comments
We're Piloting Drools
Hi. I'm Steve Shabino, and this is my first post on this blog. I'm a software consultant based in Cleveland, Ohio, who works with Java technologies.
For the past couple of weeks, I have been working on a pilot project with my client where we are building a decision support system using Drools. In the following weeks, I will document our progress so that others may learn from our efforts. While these posts will be technically accurate, I will obfuscate the problem domain to protect my client's privacy. Although I'd really like to share details, I think I can represent all the important parts via the "fake project."
Posted by Steve Shabino at 9:24 PM 0 comments
Labels: drools