Parsing Java Command-Line Arguments with Commons CLI

Recently I planned to write a small data-extraction tool in Java, but it has been N (N>4) years since I last wrote a Java program; I have never read "Thinking in Java", and I have the deep feeling that the code I write now is not very Java-style...

Because the program needs to accept fairly complex command-line arguments, I decided to use the Commons CLI library to handle that part.

The CLI jar file can be downloaded from Apache Commons; the current mature release is CLI 1.2.

To use CLI, we first create an instance of the Options class:

Options JDUL = new Options();

Through this Options object we define the arguments the command-line program will accept. One way to add an argument is the addOption() method:

JDUL.addOption("END", true, "select the Big or Little Endian");

After defining the acceptable arguments, we still need a command-line parser (CommandLineParser) to parse the actual input:

BasicParser parser = new BasicParser();
CommandLine cl = parser.parse(JDUL, args);

Below is a complete example of command-line argument parsing:

package par;

import org.apache.commons.cli.BasicParser;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.ParseException;

public class Main {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        try {
            // Define the options this program accepts
            Options JDUL = new Options();

            JDUL.addOption("h",   false, "Print help for JDUL");
            JDUL.addOption("END", true,  "select the Big or Little Endian");
            JDUL.addOption("SSM", true,  "select MSSM or ASSM");

            // Parse the actual command-line input against the definitions
            BasicParser parser = new BasicParser();
            CommandLine cl = parser.parse(JDUL, args);

            if (cl.hasOption('h')) {
                // -h: print an automatically formatted usage message
                HelpFormatter f = new HelpFormatter();
                f.printHelp("OptionsTip", JDUL);
            } else {
                System.out.println(cl.getOptionValue("END"));
                System.out.println(cl.getOptionValue("SSM"));
            }
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }
}

Running the parser from the command line:

C:\Users\maclean>java -jar "C:\Users\maclean\Documents\NetBeansProjects\par\dist\par.jar" -h
usage: OptionsTip
 -END <arg>   select the Big or Little Endian
 -h           Print help for JDUL
 -SSM <arg>   select MSSM or ASSM

C:\Users\maclean>java -jar "C:\Users\maclean\Documents\NetBeansProjects\par\dist\par.jar" -END BIG -SSM AUTO
BIG
AUTO
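CLI 1.2 can also attach a long name to an option and mark it as required. The sketch below is my own illustration (the class name LongOptDemo and the long name "endian" are not part of the original program); it assumes the commons-cli 1.2 jar is on the classpath and uses GnuParser, which understands both -END and --endian forms:

```java
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

public class LongOptDemo {

    // Parse args and return the endianness value, or throw ParseException
    // if the required option is missing.
    static String parseEndian(String[] args) throws ParseException {
        Options opts = new Options();

        // An Option with both a short ("END") and a long ("endian") name;
        // setRequired(true) makes the parser reject input that omits it.
        Option endian = new Option("END", "endian", true,
                "select the Big or Little Endian");
        endian.setRequired(true);
        opts.addOption(endian);

        CommandLine cl = new GnuParser().parse(opts, args);

        // The value is reachable through either the short or the long name.
        return cl.getOptionValue("END");
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseEndian(new String[] {"--endian", "BIG"}));
        System.out.println(parseEndian(new String[] {"-END", "LITTLE"}));
    }
}
```

With a required option, omitting -END/--endian makes parse() throw a MissingOptionException (a subclass of ParseException), so the program fails fast instead of silently printing null.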

[Repaste] The Underlying Technology of Facebook Messages

Facebook engineers published a new post on their Notes portal, reproduced below:

We’re launching a new version of Messages today that combines chat, SMS, email, and Messages into a real-time conversation. The product team spent the last year building out a robust, scalable infrastructure. As we launch the product, we wanted to share some details about the technology.

The current Messages infrastructure handles over 350 million users sending over 15 billion person-to-person messages per month. Our chat service supports over 300 million users who send over 120 billion messages per month. By monitoring usage, two general data patterns emerged:

  1. A short set of temporal data that tends to be volatile
  2. An ever-growing set of data that rarely gets accessed

When we started investigating a replacement for the existing Messages infrastructure, we wanted to take an objective approach to storage for these two usage patterns. In 2008 we open-sourced Cassandra, an eventual-consistency key-value store that was already in production serving traffic for Inbox Search. Our Operations and Databases teams have extensive knowledge in managing and running MySQL, so switching off of either technology was a serious concern. We either had to move away from our investment in Cassandra or train our Operations teams to support a new, large system.

We spent a few weeks setting up a test framework to evaluate clusters of MySQL, Apache Cassandra, Apache HBase, and a couple of other systems. We ultimately chose HBase. MySQL proved to not handle the long tail of data well; as indexes and data sets grew large, performance suffered. We found Cassandra’s eventual consistency model to be a difficult pattern to reconcile for our new Messages infrastructure.

HBase comes with very good scalability and performance for this workload and a simpler consistency model than Cassandra. While we’ve done a lot of work on HBase itself over the past year, when we started we also found it to be the most feature rich in terms of our requirements (auto load balancing and failover, compression support, multiple shards per server, etc.). HDFS, the underlying filesystem used by HBase, provides several nice features such as replication, end-to-end checksums, and automatic rebalancing. Additionally, our technical teams already had a lot of development and operational expertise in HDFS from data processing with Hadoop. Since we started working on HBase, we’ve been focused on committing our changes back to HBase itself and working closely with the community. The open source release of HBase is what we’re running today.

Since Messages accepts data from many sources such as email and SMS, we decided to write an application server from scratch instead of using our generic Web infrastructure to handle all decision making for a user’s messages. It interfaces with a large number of other services: we store attachments in Haystack, wrote a user discovery service on top of Apache ZooKeeper, and talk to other infrastructure services for email account verification, friend relationships, privacy decisions, and delivery decisions (for example, should a message be sent over chat or SMS). We spent a lot of time making sure each of these services are reliable, robust, and performant enough to handle a real-time messaging system.

The new Messages will launch over 20 new infrastructure services to ensure you have a great product experience. We hope you enjoy using it.

Kannan is a software engineer at Facebook.

Facebook chose HBase over MySQL and Cassandra to power their current Messages application. Setting aside the pile of services added later, I wonder how much of the code Zuckerberg wrote himself back in the day is still in use :).
