A Little Trick to Install Windows 8.1 on VirtualBox on Mac OS X

I'm trying to compile some of Google's code (Chromium).

So, I need to install a virtual machine on my MacBook Pro.

I downloaded the Windows 8.1 ISO from Microsoft's official website.

Then I tried to install it like any other operating system, and got a critical error:

Your PC needs to restart.
Please hold down the power button.
Error code: 0x000000C4
Parameters:
0x0000000000000091
0x000000000000000F
0xFFFFF801E5962A80
0x0000000000000000

And there was no log at all.

It's so frustrating, isn't it?

After some googling, I found an answer like this:

The problem is that the virtual machine's CPU lacks an instruction (CMPXCHG16B).

I found the explanation on Wikipedia.

Here is the relevant quote:

Early AMD64 processors lacked the CMPXCHG16B instruction, which is an extension of the CMPXCHG8B instruction present on most post-80486 processors. Similar to CMPXCHG8B, CMPXCHG16B allows for atomic operations on octal words. This is useful for parallel algorithms that use compare and swap on data larger than the size of a pointer, common in lock-free and wait-free algorithms. Without CMPXCHG16B one must use workarounds, such as a critical section or alternative lock-free approaches.[41] Its absence also prevents 64-bit Windows prior to Windows 8.1 from having a user-mode address space larger than 8 terabytes.[42] The 64-bit version of Windows 8.1 requires the instruction.[43]

See the line here? The 64-bit version of Windows 8.1 requires the instruction.

This is a problem with VirtualBox: by default, it doesn't expose this instruction to the guest.

So, to solve this problem, you need to run this command:

VBoxManage setextradata [vmname] VBoxInternal/CPUM/CMPXCHG16B 1

The vmname should be your virtual machine's name; if you really don't remember it, you can list the names using this command:

VBoxManage list vms
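
Putting the two together, the whole fix is just a couple of commands; the VM name "Win81" below is only an illustration:

# list the registered VMs, then enable CMPXCHG16B for the one you want
VBoxManage list vms
VBoxManage setextradata "Win81" VBoxInternal/CPUM/CMPXCHG16B 1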

After this setup, your virtual machine is ready for Windows 8.1 (though I really don't like Windows).

Steps to Set Up an FTP Server

Here is something that sounds really basic: FTP setup.

Yeah, FTP setup is easy and fun, isn't it?

All you need to do is install an FTP server, configure the users, and you're done.

Piece of cake, right?

YOU ARE FUCKING WRONG!!!!!

I'll write down my steps for setting up a secure FTP server, in case this helps some poor guy like me out.

You should use good FTP server software.

This is a very easy choice if you're using a distribution like CentOS or RHEL.

They suggest you install vsftpd as the FTP server. I'm not an expert in this domain, but so far vsftpd has worked fine for me.
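
Installing it is one yum command away; enabling it at boot is optional but handy (this assumes CentOS/RHEL 6 style service management):

yum install -y vsftpd
chkconfig vsftpd on
service vsftpd start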

You should create the FTP user in Linux and set up the permissions

vsftpd uses Linux's own user system and file system as its user and permission model. That's a brilliant idea, since it gets a sophisticated permission system for free.

But this requires you to treat your users and system more carefully: don't leave the folders exposed to FTP, or the FTP user itself, wide open, or anyone will be able to read or update your files over FTP without any problem.
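
For example, a minimal sketch of creating a dedicated FTP user (the name ftpuser is just an illustration; note that vsftpd's PAM setup may require the user's shell to be listed in /etc/shells):

# create the user with a home folder and give it a password
useradd -m ftpuser
passwd ftpuser
# keep the home folder closed to everyone else
chmod 750 /home/ftpuser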

Fine, this is not the key point I want to make, so I'll keep it as short as I can. Let's get to the KEY POINTS.

1. You must set up SELinux to accept your FTP server, or it will block vsftpd when it tries to access the file system.

This is a fucking annoying thing, but it is true. If you don't tell SELinux that vsftpd's actions are fine, SELinux will stop them to keep your folders safe.

SELinux can be your friend in many ways, so turning it off may not be a good option.

I googled for ways to make these two things work together, and here is the way:

/usr/sbin/setsebool -P ftp_home_dir=1 

This command updates the SELinux policy and gives the FTP application the privileges to access users' home folders.

The command takes a little while to execute, but it's the easiest way to achieve this, believe me.
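
If you want to double-check that the boolean actually took effect, you can query it back:

getsebool ftp_home_dir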

2. You must configure the iptables firewall to let FTP clients connect

This step is easy to understand: nobody wants their server too open, so at the beginning, iptables only lets ICMP and SSH requests reach the server's ports.

To let FTP clients access the server, you must open two ports: 20 for data transfer and 21 for commands.

So the configuration for iptables should be like this:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 20 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT

After this, your FTP client can connect to the server.

Are we finished yet?

NO!!! Not yet.

You still can't upload your files to the server.

Why?!

Because:

VSFTPD USES PASSIVE MODE BY DEFAULT, and FTP's passive mode works like this:

  • The FTP client tells the server: let's use passive mode
  • The server responds: you can connect to me on port xxxx for this transfer
  • The client opens a TCP channel from a local port (say 2001) to the server's port xxxx to start the transfer

Yes, passive mode can use more ports on the server than active mode. A better way to do it, isn't it?

But remember that we only allowed ports 20 and 21 in iptables?

So this is a very, very big problem for FTP clients.

They'll be confused by the server: the server tells them to open a connection to port xxxx, but when they try, they get connection refused.

So, you need to:

3. Change the vsftpd configuration so passive mode only uses ports from a fixed range

For example, like this:

pasv_min_port=10090
pasv_max_port=10100

This restricts passive mode to ports 10090 through 10100.

Then

4. You need to change the iptables configuration to open ports 10090 to 10100 for requests

-I INPUT -p tcp --dport 10090:10100 -j ACCEPT
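
One note: if you add the rule live with the iptables command rather than editing /etc/sysconfig/iptables, remember to persist it (assuming CentOS/RHEL-style init scripts):

service iptables save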

Then your FTP server is done and reasonably secure. And if you want to make the transfers even more secure, you can:

5. Add SSL transfer support to vsftpd

First you need to generate a self-signed certificate for SSL:

cd /etc/vsftpd
/usr/bin/openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout vsftpd.pem -out vsftpd.pem

This command generates an SSL certificate that will be valid for one year.

Then you need to edit the vsftpd configuration (/etc/vsftpd/vsftpd.conf on CentOS), adding these lines:

# Turn on SSL
ssl_enable=YES

# Allow anonymous users to use secured SSL connections
allow_anon_ssl=YES

# All non-anonymous logins are forced to use a secure SSL connection in order to
# send and receive data on data connections.
force_local_data_ssl=YES

# All non-anonymous logins are forced to use a secure SSL connection in order to send the password.
force_local_logins_ssl=YES

# Permit TLS v1 protocol connections
ssl_tlsv1=YES

# Do not permit the insecure SSL v2 protocol
ssl_sslv2=NO

# Do not permit the insecure SSL v3 protocol
ssl_sslv3=NO

# Specifies the location of the RSA certificate to use for SSL encrypted connections
rsa_cert_file=/etc/vsftpd/vsftpd.pem

After these steps:

6. Restart all the services

service iptables restart
service vsftpd restart

And, you’re done.

So, what did we learn today?

  1. It is very hard to be secure, especially for a very easy and fundamental service like FTP
  2. Linux is secure only when you understand it deeply and use it carefully
  3. Don't blame the firewall for your problems; it protects you
  4. When something goes wrong, maybe the only problem is your understanding, so reading and asking before complaining is a good way to solve the problem

A few words about setting up a Linux box as a wireless router

I have worked for about a month on an interesting project. It has something to do with captive portals.

I'm really a rookie in these technologies; I have studied networking before, but never as deeply as this time.

After these days of learning, I found out how powerful the Linux kernel is, and here is some of what I learned.

How to set up a wireless AP using Linux and an antenna, in bridge mode

If you want to make a Linux box into a wireless AP in the easiest mode (bridge mode), you need something like this:

  1. A Linux box with at least kernel 2.6 installed (I'm using CentOS 6.4, a pleasant distro to play on)
  2. An Ethernet card in the Linux box, so it can connect to the router you want it to connect to
  3. A wireless antenna, with its driver installed as a kernel module (it's a long story; I'll write another article about that)

Beware: make sure your antenna supports running in master or monitor mode. You can check the supported modes using the iw tool; if your antenna doesn't support at least master or monitor mode, you are doomed: you can't use it as an AP antenna.

Then you can begin like this:

  1. You need to have hostapd installed; hostapd is what runs your wireless antenna as an AP antenna. Unfortunately, you can't install it using yum; you must download the source code and compile it (not so hard).
  2. You need to create a bridge between the Ethernet interface (say eth0) and the wireless interface (say wlan0). Creating such a bridge is very easy on Red Hat: edit /etc/sysconfig/network-scripts/ifcfg-eth0 and add BRIDGE=br0 to the configuration, make the same change to wlan0's file, then restart the network and you have the bridge (see the sketch after this list).
  3. Configuring hostapd is not very straightforward, and there are many options: you need to choose the wireless interface (wlan0 in most cases), the channel for the wireless AP, the password settings for the AP, and the running mode (802.11n or 802.11ac if your antenna supports it). There are many blog posts about configuring hostapd, so I won't go into detail here.
  4. You must give the bridge an IP, so you have to change /etc/sysconfig/network-scripts/ifcfg-br0: bring the bridge up first using ifconfig, then edit the file and give it a static IP, or use DHCP to get an IP from the router.
Then you are done.

Since this AP runs like a bridge between wireless and Ethernet, this is the easiest and most robust way to run your Linux box as an access point.

How to set up a wireless router using Linux and an antenna

Setting up a wireless router is more complex than just an access point, but the first steps are the same: you need to have hostapd installed and configured, and then:

  1. Have dnsmasq installed (you can install dhcpd if you want, but dnsmasq is easier to configure)
  2. Give a static IP to your wireless interface (make it a LAN router, so give it an IP like 192.168.0.1)
  3. Make dnsmasq listen on the wlan0 interface, so every device connecting to wlan0 can get its IP address and router (192.168.0.1) from it (a sketch follows after this list)
  4. Configure iptables to allow devices to access the DHCP port
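
A minimal dnsmasq configuration for this might look like the following (the address range is only an example):

# /etc/dnsmasq.conf
interface=wlan0
dhcp-range=192.168.0.50,192.168.0.150,12h
# dnsmasq advertises its own address (192.168.0.1 here) as the router by default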

For now, the devices (phones or notebooks) can connect to your router, but they can't reach the WAN, since your router doesn't know how to get from wlan0 to the WAN.

So you need:

  1. Add a NAT rule, something like iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE. This lets anything coming from any interface disguise itself as if it were sent from eth0 (the WAN port of your Linux box router)
  2. Don't forget to let the kernel allow IP forwarding: net.ipv4.ip_forward=1
  3. And don't forget to let iptables allow forwarding from wlan0 too: iptables -A FORWARD -i wlan0 -j ACCEPT (all put together below)
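
Put together, and assuming eth0 is the WAN side, the whole thing is only a few commands (the sysctl change should also go into /etc/sysctl.conf so it survives reboots):

# let the kernel forward IP packets
sysctl -w net.ipv4.ip_forward=1
# masquerade everything leaving through eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# allow forwarding from the wireless LAN, and the replies back in
iptables -A FORWARD -i wlan0 -j ACCEPT
iptables -A FORWARD -i eth0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT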

After this, every IP packet coming from wlan0 can make its way to eth0 and leave through the kernel. This is also the basic working mode of your home router.

Conclusion

It is not so hard (yet not very easy) to turn a Linux box into a wireless router. Even when you pull it off, it won't perform better than a router you could buy at the same price (which has a tuned kernel and hardware), but you gain as much control as you want, and you can keep hacking.


About the mod_perl and perl-Apache-Test conflict when installing PacketFence

It’s been a very long time since my last post.

I've been very busy these days. Now I'm working for a company trying to create a captive portal, so I gave PacketFence a try.

It is a very nice application that runs on CentOS.

I started with CentOS 6.4, added the repositories it needed, and then:

yum install -y packetfence

What a nice day!

But bang!!! What!?

file /usr/share/man/man3/Apache::Test.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestConfig.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestMB.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestMM.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestReport.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestRequest.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestRun.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestRunPHP.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestRunPerl.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestServer.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestSmoke.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestTrace.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

file /usr/share/man/man3/Apache::TestUtil.3pm.gz conflicts between attempted installs of perl-Apache-Test-1.30-2.el6.rf.noarch and mod_perl-2.0.4-10.el6.x86_64

A conflict in the man pages!!! Man!!!

You get a conflict in the fucking manual that I wasn't even supposed to read, and it fails the installation completely!!!

I googled it and found plenty of reports of this problem.

And none of them got anything fixed….

So how can I go on?

I tried a dirty way.

First, install mod_perl and perl-devel using yum:

yum install -y mod_perl perl-devel

Then download perl-Apache-Test from CPAN:

wget http://search.cpan.org/CPAN/authors/id/P/PH/PHRED/Apache-Test-1.38.tar.gz

Then install it:

tar xzf Apache-Test-1.38.tar.gz
cd Apache-Test-1.38
perl Makefile.PL
make install

But even after we've installed the fucking perl-Apache-Test, we're not done yet.

Yum doesn't know you've installed perl-Apache-Test; it will still try to download and install it, and fail your installation again.

So you need to tell yum to skip it, by adding one line to /etc/yum.conf:

exclude=perl-Apache-Test

Then try the yum install again, and you'll be done.
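
That is, run the same command as before:

yum install -y packetfence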

This investigation cost me an hour, and I was quite frustrated, so I wrote this post to record it. If it helps anyone, my time and energy won't have been wasted.

Welcome Lancelot

Some words

It's been very busy these days: I've been busy with my projects, and I've used my spare time to create a proxy tool for Cocoa, called iProxy.

iProxy is a smart SSH tunnel proxy: it can use an SSH tunnel to bypass the firewall, use a PAC file to choose which URLs should go through the proxy, and it can also pick free HTTP proxies.

I haven't released it yet, since I'm still coding the smart PAC part and trying to figure out how to charge for that; of course, the iProxy client is open source too.

That's the reason I haven't updated this blog for a very long time (about two months). After the launch of iProxy, I may have time to sit down and write something about Objective-C and Cocoa programming; for now, I just need to perfect iProxy and make it more like a product, not a toy.

Now for the main point of this post: announcing the scripting tool I mentioned in my last post.

I call this scripting tool lancelot. The details are below. You can find the code here: https://github.com/guitarpoet/lancelot

I hope you'll like it and give it a try; at least, I'm enjoying using it in my projects now.

Lancelot, a tool for managing multiple daemon processes

Introduction & Purpose

The purpose of lancelot is to help launch and manage the worker processes for my project.

Lancelot is used to solve the problem I encountered in my project:

  1. I need to launch many worker processes as daemons on many slave machines (say, 5 workers for each slave machine)
  2. After a configuration change, I need to restart all the worker processes (sure, you could use Apache ZooKeeper to handle this, but it would make the worker processes more complex, since each worker just needs to do some very simple work, like fetching data from a URL and storing it into HDFS)
  3. I need to check all the worker processes' status periodically (or maybe I want an alert when 50% of the processes are down)

So, lancelot is used for:

  1. Launching as many worker processes as you want, saving the pid files using the template you gave it (something like /var/run/worker-1.pid), and redirecting standard output and error output using the templates you gave it (something like /var/log/worker-1.log and /var/log/worker-1.err)
  2. Launching the worker processes as daemons using nohup or daemonize (lancelot prefers daemonize over nohup); either way, the worker process won't exit until it hits an error or decides to exit itself
  3. Checking the status of all the worker processes you have launched, printed in a ps format (you can also set the ps format, and you can use a grep pattern or the pid template to check the status)
  4. Killing all the processes you want to kill, by pid template or pattern (a hypothetical session is sketched after this list)
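
Putting it together, a hypothetical session might look like this (the paths are only illustrations, based on the option docs below):

lancelot launch -p /var/run/worker-{}.pid -o /var/log/worker-{}.log -e /var/log/worker-{}.err -t 5 /bin/sh fetch_data.sh
lancelot status -p /var/run/worker-{}.pid
lancelot kill -p /var/run/worker-{}.pid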

And for ease of deploying lancelot to multiple slave machines, lancelot also has a deploy function to push it to all the machines; this function depends on another scripting tool called taktuk.

Installation

For installation, just unpack the package and copy lancelot to wherever you want to put it (/usr/local/lancelot by default).

Then change the lancelot home variable in the lancelot script, and add or symlink the lancelot script into your PATH (usually /usr/local/bin/lancelot).

And you can use it.

Usage

You can use the lancelot script like this:

lancelot <command> [options]

For example:
lancelot launch -c ~/launch_config -t 10 /bin/sh fetch_data.sh

Lancelot Launch

Description:     
    The process launching script.    
    Example:    launch -o /tmp/a-{}.log -e /tmp/a-{}.err -w /tmp /bin/ls .
    Email to guitarpoet@gmail.com if bug found.
Options:     
    -w|--workingdir:        The working dir for the launch
        This option requires 1 args.
    -o|--output:
        The standard output of the application
        This option requires 1 args.
    -e|--error:
The error output of the application
        This option requires 1 args.
    -p|--pid:
        The pid template for the application
        This option requires 1 args.
        Option is required.
    -c|--config:
        The configuration file location for lancelot launch
        This option requires 1 args.
    -t|--count:
        The process count that want lancelot to launch
        This option requires 1 args.

Lancelot Status

Description: 
    The process status script.
    Example:
    status -p /tmp/a-{}.pid
    Email to guitarpoet@gmail.com if bugs found.
Options: 
    -p|--pid:
        The pid template for the application
        This option requires 1 args.
        Option is required.
    -a|--application:
        The application pattern to grep
        This option requires 1 args.
    -c|--config:
        The configuration file location for lancelot launch
        This option requires 1 args.
    -f|--format:
        The ps format for this application, 'pid args' is by default
        This option requires 1 args.

Lancelot Kill

Description: 
    The process kill script.
    Example:
    kill -p /tmp/a-{}.pid
    Email to guitarpoet@gmail.com if bugs found.
Options: 
    -p|--pid:
        The pid template for the application
        This option requires 1 args.
        Option is required.
    -a|--application:
        The application pattern to grep
        This option requires 1 args.
    -c|--config:
        The configuration file location for lancelot launch
        This option requires 1 args.

Lancelot Broadcast

Description: 
    Broadcast exec the lancelot command to all the hosts
    Example:
    broadcast -h ~/hosts status -p /tmp/a-{}.pid
    Email to guitarpoet@gmail.com if bugs found.
Options: 
    -c|--config:
        The configuration file location for lancelot broadcast
        This option requires 1 args.
    -h|--hosts:
        The hosts that should launch the lancelot broadcast
        This option requires 1 args.
    -l|--login:
        The login name for login using taktuk
        This option requires 1 args.
(The broadcast command uses taktuk under the hood to reach the hosts.)

Thinking on daemon process launching and management

For the project I've been working on recently (fetching data and analysing it), I set up a crawler and analyser cluster.

First, many crawlers (as many as the server can bear) must be spawned and configured.

Since the crawlers can use ZooKeeper to configure themselves, the configuration part doesn't need to be considered here.

The hard part of this architecture is that you'll need to launch many crawler instances by hand, and restart them all whenever you want to reconfigure them (for example, the data-processing rules: they are loaded at the very beginning and never reloaded during processing, partly because the rules don't change that much, and partly because the rules need to be compiled).

Sure, I wrote a script to launch and kill the crawlers. Here are my thoughts on how to implement the script and what functions it should have:

  • 1 It must be executable, with almost no dependencies, on most Linux distributions

It is the same thought as with the crawler: I want the crawler to run on as many machines as possible, so the crawler is based on jersey (the JavaScript runtime I have written on Rhino, based on Java). For this thought, we have 3 options: bash, perl, Java.

  • 2 It must be easy to deploy on many machines, or have a self-publishing function.

This is a very fundamental function of this launching script. I'm already tired of deploying the crawler over many machines, and deploying the script would add even more burden.

The crucial part of this function is how to publish the script itself to many machines, so that you can spawn many processes on each machine and keep managing them.

For publishing the script to other machines, we can use scp: a powerful, easy and safe way to copy resources from machine to machine. With just a little configuration, you can copy files from the master to many slaves without interaction.

I can write a script to handle that, so that after deploying the script on the master machine, I can deploy it automatically to many slave machines.
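
A minimal sketch of that idea, assuming key-based SSH login is already configured and ~/hosts lists one slave per line:

# copy the tool to every slave in parallel
for host in $(cat ~/hosts); do
    scp -r /usr/local/lancelot "$host":/usr/local/ &
done
wait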

For this function, I found taktuk, a very good tool for managing many slave machines (say, installing Java or jersey on them). It uses perl as its own language and is much better at publishing, but I won't go into it deeply in this article, since it doesn't have the ability to spawn and manage many processes (though you can definitely use it as the publishing layer).

  • 3 It should be thin, light and cheap for spawning processes

I don't think running this script as a server is a good idea (at least in my opinion). The script should just launch the processes with the proper stdin, stderr, stdout and working directory, and die after that. Keeping the script running and wasting resources is not wise. It is just a launcher, after all.

  • 4 It should be configurable, e.g. how many processes to launch at one time

As I wrote above, I want the script to launch many crawlers on one machine, so the crawlers take as many resources on that machine as we can get. So I want the script to spawn many crawler processes at one time, each with its own stdout and stderr.

And, yes, the slave machines' IPs should be configurable for this script.

  • 5 It should launch the processes as daemons

Since the crawlers run day and night on the server, I don't want them to live only inside my SSH session. So I must make them OS daemons that run in the background and aren't harmed by any signals or sessions. The script can use daemonize or nohup to achieve this.
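
For example, the two approaches look roughly like this (the pid and log paths are only illustrations):

# with daemonize
daemonize -p /var/run/worker-1.pid -o /var/log/worker-1.log -e /var/log/worker-1.err /bin/sh fetch_data.sh

# or with plain nohup
nohup /bin/sh fetch_data.sh > /var/log/worker-1.log 2> /var/log/worker-1.err < /dev/null &
echo $! > /var/run/worker-1.pid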

  • 6 It should provide a function to check the processes' running status

Since the crawlers are programs (and there are not a few of them), no wonder some crawler will hit this or that problem and stop working (bugs, lack of memory or disk, the Java VM crashing, or the kernel crashing).

So I need to check their health: if some slave's crawler has died, or a slave has been restarted, I'll know when I check. Or I can write a script to check it and email me when something is wrong, so I can plan to restart them or do something else.

  • 7 It should provide a function to kill the running processes

This is very useful for redeploying the scripts, or redeploying the data-analysis rules for all the crawlers. Say I have added another rule to the crawler: I need to publish the rule to all the slave machines (easily done with taktuk), and then restart all the crawler processes.

So, for requirements 1, 2 and 3, bash and perl are the best choices. And since taktuk can handle the publishing and remote execution, I chose bash as the scripting language.

Thanks to taktuk's abilities, I can keep the control logic on the master and manage all the servers from there; I just need to redirect stdout to the master, and I can get every detail of a slave's status.

Maybe you will ask: why bother? If you just need a job manager, why not use Hadoop? Hadoop is very good at executing and managing jobs.

The answer is that Hadoop, or MapReduce, is not a good fit for crawling. Crawling is recursive: it starts at a seed point and discovers more and more tasks from there. You don't know how many times it should recur, and MapReduce is not good at recursive operations.

I certainly use Hadoop, but only to handle the data the crawlers have fetched; as a crawler, it is not useful.

At the end of this article, I'd say that running a cluster of crawlers is very difficult: the logic of a crawler is very complex if you don't want it to cost too much of your precious time. I'll write about how I wrote the crawler in another article.

Thanks to taktuk; without it, the work on my scripting tool would have been even harder than writing the crawler. Life is hard when analysing massive data, but with better tools, at least your life will be easier.

The launch of Jersey

I'm glad to announce the launch of my JavaScript execution environment (called Jersey, the name coming from JavaScript made Easy).

It began with the thought of getting a handy tool to handle the tiny programming tasks I have to do very often, which share a lot of common ground (such as text manipulation, fetching and processing HTTP site data, XML processing, HTML processing, Solr data updating, and data migration).

I'll write an article to explain this later.

The link for Jersey is https://github.com/guitarpoet/jersey and its README is below.

Jersey: A pluggable JavaScript execution environment

This is a pluggable JavaScript execution environment based on Rhino. It can run on any Java runtime from Java 5 up.

Jersey also contains a package manager called jpm: you just need to provide an Ivy module file, and you can fetch all the plugins and their dependencies from Maven distributions.

Concept

Jersey is a running environment for JavaScript: it can run Mozilla Rhino flavoured JavaScript using Java. The aim of Jersey is to provide a nice and pluggable environment to play with or use JavaScript, based on the rich set of Java libraries.

The concepts of Jersey are listed below:

  • Pluggable: Jersey is pluggable; you can install any plugin into it, and a plugin is just a Java jar. All plugins are deactivated at the start of the environment; you can activate them using Spring (Jersey provides the functions to do that).
  • Simple: Jersey tries to provide an API that is as simple as possible, along with a console to play with JavaScript. It will add support for other languages that compile to JavaScript (for example CoffeeScript) in the console, and the console supports tab completion, so you can fool around very easily.
  • Configurable: Jersey provides configuration based on Java properties, and all the configuration for the application and plugins lives in the same configuration file.
  • Discoverable: you can list all the provided functions and modules using native functions.
  • Docable: Jersey provides a doc method for all functions and plugin modules; you can read the documentation of functions and modules using the man function, and there is even a pager function to view the pages.
  • Easy to be a server: since Java has many good embeddable services, Jersey uses Apache Derby to provide a database service and Jetty to provide an HTTP server service. I also have a small CMS written using Jersey's mvc and http plugins.
  • Easy to use: Jersey provides a package management system based on the famous Maven and Ivy, so you can download every plugin and its dependencies using an Ivy module file.
  • Can support other languages: since JavaScript is not a very simple language to write, Jersey also supports CoffeeScript (detected by the file extension), compiling it with coffee and running it; it will support others in the future.
  • Powerful: Jersey is based on Java, so it can take advantage of current Java and open-source Java libraries to provide powerful functions. It can use JDBC to access databases, Commons DBCP to pool database connections, the Solr client to access Apache Solr, Log4J to handle logging, and even JBoss Drools to do rule matching.
  • Stable: thanks to Java's stability and JavaScript's simplicity, Jersey can run very stably: it can run for a very long time without memory leaks or destabilising the system. Jersey's multithreading uses Java's multithreading support, and Jersey provides a JobManager function based on Java's concurrency library.

Architecture

Jersey is based on Java, Rhino and Spring: the component structure of Jersey is based on Spring, and the JavaScript interpreter is Rhino.

All the native functions and native modules are initialized when the environment initializes. Any plugin you want to use must be installed into the lib folder first and configured correctly in the configuration file; then use the require function to load the initialization script for that plugin.

Usage

Jersey is used from the command line. You can run the console with just the jersey command, or run script files one by one with jersey a.js b.js c.js.

Jersey accepts the -c option to set the configuration file location (the default is config.properties), and -s to set the script file (this is only used for standard input; the standard-input file for Jersey is –).
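
So a couple of invocations look like this (the script and config names are only illustrations):

jersey
jersey myscript.js
jersey -c myconfig.properties myscript.js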

Native Functions

Here are the docs for some native functions that Jersey provides; you can list all the native functions using the functions() function.

  • require: requires a library by its resource location; supports the file: and classpath: protocols.
  • appContext: loads an application context into the JavaScript console
  • config: reads the configuration from system.properties, or lists all the properties
  • functions: lists all the functions this shell provides.
  • modules: prints all the modules that have been loaded.

Native Modules

Here are the docs for some native modules that Jersey provides; you can list all the native modules using the modules() function.

  • console: The console object.
  • file: The file utils.
  • csv: The csv utils service.
  • sutils: The string utils service.

Standard Libraries

Jersey provides some standard JavaScript libraries in the distribution. Here is the list of the libraries:

  • std/common: provides common JavaScript extensions, for example capitalize, isBlank, endWith and so on.
  • std/date: provides date utilities
  • std/evn.rhino: provides env, so you can use libraries like jQuery
  • std/man: provides the manual function for all native functions and modules
  • std/jquery: provides the famous jQuery
  • std/underscore: provides underscore

You can load your own scripts using the classpath protocol too.