Getting rid of monoliths: #lagom and microservice hooha

The road to ridding yourself of application monoliths is one fraught with many obstacles, mostly organisational. Whatever your reason behind this brave move, you have convinced yourself that you will fix this big ball of mud, somehow… You will no longer stand for this shanty town of spaghetti code, riddled with nonsensical donkey poo. The code no longer speaks to you; it has surpassed the point of having a smell. This abomination has a cyclomatic “flavour” that you can almost taste at the tip of your tongue. No amount of Ballmer’s peak is going to help. So you are going to make good on that promise to your linting tool, probably Sonar, to whom you have whispered hours of sweet nothings to the tunes of Bob Marley, “Don’t worry ‘bout a thing ’cause every little thing gonna be alright…” What once was a good idea has hideously mutated into a giant walking talking abomination that crawls under your skin and haunts your every step. Under the guise of some sound architectural decisions, you are going to pay down this technical debt, hard. You are going to do microservices, and you are going to ship Docker containers. If this is you, read on.

The biggest modern day monolith of them all: J2EE

Tasked with the daunting job of ripping apart functional applications into microservices, you will battle bikeshedders whilst debating your silly asses off with armies of monolith fandom. During what seems to be a very long coup d’état, this attempt to evangelise a newer, leaner architecture and approach is riddled with skeptics. “What is frontend without JSF or JSP?! How dare you even question server-push technology like XYZFaces? Then how about the JNDI lookups and EJBs?! Surely you cannot replace these things?”

So six years ago, I had the very pleasant experience of pulling out Glassfish and replacing it with embedded Jetty, swapping I-Hate-My-Faces for some simple JS (nowadays we call it the MEAN stack, or one of its many permutations), and starting to build APIs along microservice principles. Turns out that is what Spotify did too at the time. So there you go, haters. Bottom line: do not use J2EE, no matter what, if you care about having a competitive advantage. But if you need some good reasons, here are some:

  • Most J2EE containers are grounded in the notion of vertical scalability, last I checked. Clustering should be idempotent and stateless, and scaling should be horizontal.
  • J2EE containers are not cloud native. Just look at their clustering! Unless you feel like having VPNs and private networks across different public clouds or data centers, you can probably just forget it.
  • So let’s put it behind the load balancer? No, most J2EE containers don’t do shared session persistence out of the box.
  • Let’s not kid ourselves, you will customise the crap out of this J2EE container; dropping war files upon war files to fix all its shortcomings.
  • Your sysadmins do not work with war/jar/ear files. They are *nix gurus and deserve to be treated as such. Ship your product like an actual product, sir! Apt/yum/brew is your friend and please follow the Filesystem Hierarchy Standard (FHS) for goodness sake.

Decide exactly what you are going to piecemeal

Have a very clear idea of the different categories or lifecycles of the applications you are going to transform. Timing is important, as is showing results. After all, this is the minimum viable product (MVP) approach, naturally. I highly recommend avoiding systems of record to begin with; those are definitely not quick wins and you will be in an arduous match going the full 5 rounds. Your future competitive advantage does not sit in your ERP or CRM systems. If it does, then um, yeah.

  • Isolate and separate clearly the functionality you are transforming or building
  • Ensure the isolation goes all the way down to infra, this is devops after all
  • Think of how to horizontally scale
  • Think of elasticity
  • Think of shared persistence across network

Java is dead, long live Java!

The trend a few years back was relentlessly hating on Java and Java devs. Evidence shows otherwise. Java is still around and it is going to be around for quite some time to come. No need to switch it out just yet, even after the programming nuclear winter imposed by object-orientation and Java/.NET alike. Good code, a good ecosystem based on tooling, and good solid design patterns go a long way, regardless of application domain or programming language.

The truth is, the same could have been said about node.js. I recall a number of years back, a few colleagues quoted Hacker News regarding the state of node.js and how immature it was, and how Java was the “preferred” choice, even though the sysadmin community bagged the crap out of Java at the time. If you make strategic decisions predominantly based on the whims of HN, then you are just as much of a plonker as the next troll. What your node.js/Java boys and girls need to remember:

  • Repeatable, testable build pipelines. Think CI/CD.
  • Coding standards and linting, no-brainers there
  • Packaging. Do the right thing, treat sysadmins as your end customers.
  • Separate out load balancing or clustering to other applications like nginx or haproxy. TCP stack makes more sense when written in C.
  • Lord forbid you try to do TLS termination in Java. This is really not cool, bro. You have a number of other choices, so do not add this complexity to the landscape. There is no OpenSSL implementation in Java, and OpenSSL is already difficult enough to maintain as it is.
  • Good monitoring and logging practices go a long way
  • Think network. Think TCP and HTTP.
  • Your JVM will live on top of a kernel. Know them. Tune that JVM and tune that kernel if needed.
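
On that last point, a hedged sketch: the flags below are real HotSpot and Linux knobs, but the values (and the jar name) are placeholders, not recommendations. Measure first, tune second.

java -Xms512m -Xmx512m \
     -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -XX:+HeapDumpOnOutOfMemoryError \
     -jar my-service.jar

# and on the kernel side, for example, a deeper accept queue for a busy listener
sudo sysctl -w net.core.somaxconn=1024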

So, microservices. Heard of #lagom?

So you have found a good set of business functional requirements to transform into a set of microservices. Heard of #lagom? Maybe Dropwizard or Spring Boot? The choices are probably all OK; when doing microservices, there are simply no bad choices in my opinion. The gains outweigh the means here. The kicker is that there are probably a number of customised endpoints you will need to integrate with. This could be HTTP, something-else-over-TCP, or whatever. There could also be JPA-backed relational stores or NoSQL data stores you need to use. Pick your microservice framework component knowing that this is a framework and it can easily grow. The microservice strategy can easily bloat into “milliservice” or “services” (SOA?) if you are not careful. So just how do you stop the size of the code base from expanding? Keep distinct business functionalities as separate services and code bases. The sizing is up to you. Also, split common functionalities out into submodules. Both Dropwizard and Spring Boot have a bunch. Lagom, for example, has recently been introduced as a microservice framework for Java, and it has quite a lot of these connectors already in place. For me, I opted to homebrew our own microservice framework for maximum flexibility, ownership, and performance tuning.
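
To give you a taste, here is a minimal sketch of a Lagom service descriptor, based on Lagom’s documented Java API; the HelloService name and path are made up for illustration:

import static com.lightbend.lagom.javadsl.api.Service.named;
import static com.lightbend.lagom.javadsl.api.Service.pathCall;

import akka.NotUsed;
import com.lightbend.lagom.javadsl.api.Descriptor;
import com.lightbend.lagom.javadsl.api.Service;
import com.lightbend.lagom.javadsl.api.ServiceCall;

// one distinct business functionality = one service interface
public interface HelloService extends Service {

  // GET /api/hello/:id
  ServiceCall<NotUsed, String> hello(String id);

  @Override
  default Descriptor descriptor() {
    return named("hello").withCalls(
        pathCall("/api/hello/:id", this::hello)
    );
  }
}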

Either way, armed with your chosen messiah of a framework, the idea here is to rain down free non-functional requirements across multiple projects and dev teams. Cost leadership for all!

  • Ease of hooking up to modern monitoring tools with a configurable metrics set: JVM memory and GC, CPU, vmstat, iostat, and so on (see the sketch after this list)
  • Ease of pulling logs or metrics out into, say, InfluxDB or something similar.
  • Connectors to DBs should be submoduled and shared for future projects. Polyglot persistence ftw.
  • API documentation is super important. Do not assume your API users know your API, and make a point of maintaining backward compatibility
  • Follow semver.
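
On the monitoring point, here is one way to do it using the Dropwizard Metrics (Codahale) library; the metric name and reporting interval are made up for illustration:

import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import java.util.concurrent.TimeUnit;

public final class MetricsExample {
  public static void main(String[] args) throws InterruptedException {
    MetricRegistry registry = new MetricRegistry();

    // console for the sketch; swap in a Graphite or InfluxDB reporter in production
    ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
        .convertRatesTo(TimeUnit.SECONDS)
        .convertDurationsTo(TimeUnit.MILLISECONDS)
        .build();
    reporter.start(10, TimeUnit.SECONDS);

    // time each request end to end; Timer.Context is Closeable, so try-with-resources works
    Timer requests = registry.timer("api.requests");
    try (Timer.Context ignored = requests.time()) {
      Thread.sleep(100); // pretend to handle a request
    }
    Thread.sleep(15_000); // let the reporter flush once before exiting
  }
}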

Keeping your head above water is number one. Hang in there, the good days will come. Just remember, nothing in software has been new since 1970; it just gets modern marketing hype. And lastly, your fancy new microservice is legacy as soon as you launch. So please do consider the future generations. Let’s end this cycle of spaghetti monster code through old-fashioned craftsmanship.


Microservice API on Elastic Beanstalk with Jetty, Jersey, Guice, and Mongodb

This blog post aims to outline how one can very easily ship microservice APIs on Elastic Beanstalk with Jetty, Jersey, Guice and Mongodb. This piece of work, written in Java, was a weekend inspiration where I finally found a coherent red thread through some of the work I did in the past. So if you are reading this, be fully aware that this stack is no longer cool and hip since the conception of RxJava and Akka 😀 http://blog.circleci.com/its-the-future/

Code lives here. https://github.com/yveshwang/api-example

Why Elastic Beanstalk

Platform as a service (PaaS) is rather nice. A little similar to Heroku, an Elastic Beanstalk (EB) environment can be tailored to run Docker containers amongst other things. Where EB truly shines is that the platform is accessible and completely configurable as if it were IaaS. This is made possible by ebextensions. For example, each EB instance by default ships with a local Nginx instance, as designed by AWS. If you really want to switch it out, in theory you can do so by hijacking the initialisation process through ebextensions, uninstalling Nginx, and installing something like Varnish instead 😀 Yep, you can well and truly extend and tweak the platform to your heart’s content and potentially break everything.
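
As a heavily hedged sketch of that idea (the file name and ordering keys are illustrative; commands and packages are real ebextensions sections):

# .ebextensions/00-varnish.config
commands:
  01_stop_nginx:
    command: service nginx stop
    ignoreErrors: true
packages:
  yum:
    varnish: []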

Shipping to Elastic Beanstalk is fairly straightforward, as it mostly involves a Dockerfile or a Docker image hosted in a public or private docker.io repository. If you have a Dockerfile or a Docker image, and are a bit savvy with the eb cli tool, you are sorted and ready to deploy to Elastic Beanstalk.
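
The Dockerfile itself can stay dead simple. A sketch, assuming your build produces a single fat jar (the paths and jar name are made up):

# run the fat jar behind EB's nginx; EXPOSE tells EB which port to map
FROM java:7
EXPOSE 8080
COPY build/libs/api-example-all.jar /app/api-example.jar
CMD ["java", "-jar", "/app/api-example.jar"]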

Having a Docker container also makes testing repeatable and standardised. See the previous build pipeline blog series. It is by intention that the infrastructure part of setting up EB is left out of this post for now, as I believe Elastic Beanstalk deserves a blog post of its own. So let’s keep it old school and talk UML and code a little bit in this blog post.

Why Jetty

Jetty is a lightweight container, used here as a naive attempt to defeat the monolithic J2EE containers, because let’s be honest: most of us do not use half the functionality in J2EE, and the clustering of these J2EE containers goes against the very principle of microservices in my opinion. Instead, we should adhere to HTTP and RESTful-API everything! Note that Jetty is most certainly cool, but it is not RxNetty or Akka-http cool.

Why Guice

Inversion of control is neat. On a grander scale, Guice can be used to inject mock layers en masse; for example, using Mockito to configure an entire mock data access layer and injecting that in the context of unit or integration testing, thereby allowing more tests to be written with fewer dependencies. Guice is also a nice way to help address the separation of concerns by keeping configuration away from business logic. Lastly, being able to do @Inject anywhere is powerful and allows us to construct a lot of templates and basically scale out horizontally through scaffolding code. When used properly, this is the little unsung hero of the Java world in my opinion.
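
Here is a minimal sketch of that mock-injection trick; UserDao and User are hypothetical stand-ins for the DAO layer in this example:

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

class User {
  final String email;
  User(String email) { this.email = email; }
}

interface UserDao {
  User findByEmail(String email);
}

// binds a Mockito mock in place of the real Mongo-backed DAO
class MockDataModule extends AbstractModule {
  @Override
  protected void configure() {
    UserDao dao = mock(UserDao.class);
    when(dao.findByEmail("a@b.c")).thenReturn(new User("a@b.c"));
    bind(UserDao.class).toInstance(dao);
  }
}

// in a test: everything downstream now sees the mock DAO
// Injector injector = Guice.createInjector(new MockDataModule());
// UserDao dao = injector.getInstance(UserDao.class);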

Why Mongodb

Expect endless devops discussion on this very topic. Ever since the hacker news trolls came out of the woodwork against 10gen, the discussion has never ended. I like Mongo. I like it because it is fast to bang out a prototype.

DBs can vastly differ in ACID properties and thus address different combinations of CAP. I think I will save my opinion on Mongodb for another blog post another time. For now, Morphia is nice to work with in Java.
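
As a sketch of why Morphia is pleasant (the entity and database names are made up; this follows the classic org.mongodb.morphia API):

import com.mongodb.MongoClient;
import org.bson.types.ObjectId;
import org.mongodb.morphia.Datastore;
import org.mongodb.morphia.Morphia;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

// each entity maps to its own Mongodb collection
@Entity("users")
class User {
  @Id private ObjectId id;
  private String email;
  User() {} // Morphia needs a no-arg constructor
  User(String email) { this.email = email; }
}

public class MorphiaExample {
  public static void main(String[] args) {
    Morphia morphia = new Morphia();
    morphia.map(User.class);
    Datastore ds = morphia.createDatastore(new MongoClient(), "apiexample");
    ds.save(new User("a@b.c")); // insert or update by @Id
  }
}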

Why Jersey

Jersey is a pretty well structured way to write RESTful endpoints.
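
For instance, a bare-bones JAX-RS resource looks like this; the path and payload are made up:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// one facade maps to one HTTP resource (see the conventions below)
@Path("/users")
@Produces(MediaType.APPLICATION_JSON)
public class UserResource {

  @GET
  @Path("/{id}")
  public String get(@PathParam("id") String id) {
    // in the real stack this delegates to the injected facade
    return "{\"id\": \"" + id + "\"}";
  }
}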

Putting it all together

Busting out some sick UML skills here.

Class diagram for api-example

Some basic principles by convention are as follows (a small sketch of the layering follows this list):

  • Each entity lives in its own collection in Mongodb
  • Each entity has one data access object (DAO)
  • The Facade pattern is applied; facades should only contain business logic (no DB-related operations)
  • Each DAO then at the very least has its own facade that can be extended to support business logic
  • You can freely inject other facades into one another.
  • Each facade maps to one HTTP resource supporting typical CRUD routines for that entity’s
    RESTful interfaces, GET, PUT, POST, DELETE and PATCH (ha!)
  • Caching headers, ETag, IMF headers can live in filters
  • Basic auth is also supported here as an example, that should live in filters too
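
A small sketch of those conventions wired together; all names are hypothetical:

import com.google.inject.Inject;

class Invoice { String id; }

// DAO: the only layer allowed to touch Mongodb
class InvoiceDao {
  Invoice findById(String id) { /* Mongo lookup lives here */ return new Invoice(); }
}

// facade: business logic only, and facades can freely inject one another
class InvoiceFacade {
  private final InvoiceDao dao;

  @Inject
  InvoiceFacade(InvoiceDao dao) { this.dao = dao; }

  Invoice fetch(String id) { return dao.findById(id); }
}

// the resource layer then injects InvoiceFacade and exposes GET/PUT/POST/DELETE/PATCH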

Benefits

  • Loose coupling between the layers. You can replace Mongo quickly by just replacing the DAO implementations.
  • Most code is scaffolding or pure business logic. All connector code and basic CRUD support, including PATCH, lives in respective base classes for the entity, DAO, facade and resource layers, and these can be easily extended and reused.
  • Easy to test. All layers are tested, all entities are tested. And the test code can be easily extended
  • You can ship this code to an offshore team and expect that they can easily create new entities and new HTTP endpoints in a short time by simply copypasta-ing some scaffolding code, reusing some templated CRUD classes, following the basic CRUD routines, and pumping out some basic business logic 🙂 Good times!
  • If you actually have a good team to work with, then this stack is very easy to extend by simply following the Facade pattern. Build cool stuff like your own in-memory RRDs for statistics, then inject those statistics into other business logic!
  • Clustering is easy because this stack speaks HTTP and you would simply need a load balancer and some minor (or major?) Mongodb config.

Cons

  • At the core, the Facade pattern is used liberally. This is not an event-driven or reactive approach at all. When using facades, think shared memory, which means threading and parallelism will require due diligence (see the snippet after this list). This is one reason I believe an event-driven, message-based approach would improve on the Facade pattern.
  • The stack compiles against JDK7. It would work fine with JDK8.
  • Not reactive, and by definition, not hipster enough.
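
On the shared-memory point, a small sketch: a facade instance is shared across Jetty’s worker threads, so any state it keeps must be thread-safe. The class name is hypothetical:

import java.util.concurrent.atomic.AtomicLong;

// a facade holding shared state: use atomics (or locks) rather than bare fields
public class StatsFacade {
  private final AtomicLong requests = new AtomicLong();

  public void recordRequest() { requests.incrementAndGet(); }
  public long totalRequests() { return requests.get(); }
}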

Migrating to Gradle: From Ant+Ivy emo to Gradle hipster

A deep-seated desire to test out Gradle, and to write a blog post littered with pop references, has been brewing for a long time. I recall having a spirited conversation with a fellow Java developer from 10gen during the speakers’ dinner at Javazone ‘13 regarding the pains and tribulations of migrating from Ant to Gradle. Talks awash with Gradle enthusiasm and pleasantries, dredging through the fire and brimstone of maintaining Ant build scripts, and the verbose perversion that is Maven. This was exactly a year ago.

At the time, the build scaffolding prevalent throughout our modules and projects was Ant, backed by home-baked top-level parent XML files that contained a bunch of common library definitions and tasks. We had things like build-commons.xml, junit.xml and findbugs.xml etc floating around all over the shop. Yes, even built artifacts were checked into the repo (at least until Ivy was adopted). Like a kid who got caught stealing candy, this was some shameful stuff, but unabashedly, it got the job done. XML for build scripts is like sculpting a statuette with a chainsaw whilst listening to Fall Out Boy; a sad emo affair it was.

Readable code == better maintenance

Maintenance was the big factor to consider. The Ant build script was growing large, encapsulating everything from executing various levels of testing, static code analysis and docs generation to the bundling of certain jars, etc. The list just seemed to go on and on. Changes often broke the build scripts and a lot of things were still hardcoded, despite having this build-common.xml on a parent level. Trying to explain the build process to someone was also extremely difficult. The silver lining here is that it was still not nearly as bad as the notorious Netbeans-generated Ant build scaffolding.

Gradle simplifies the tasks, without the verbosity and rigidness of Maven. The flow of the Gradle build process is usually governed by the plugins, and it is super easy to extend and customise. Since we are all developers at heart here, reading lines of code instead of hundreds of lines of XML is a lot easier to swallow.

DSL > XML

Groovy is, well, groovy! Pardon the pun, but nothing is more clever than being able to spout random functional programming anecdotes in a company surrounded by C programming dinosaurs. One does sound a bit more hoity-toity when using words like currying, closure, and map-reduce, and having debates on whether currying is monadic. I mean, pointers and kernels just don’t cut the mustard in hipster convo these days.

Getting out of dependency hell

The highway out of dependency hell is Gradle, the reason being the built-in dependencies and dependencyInsight commands. After working with NPM and Bower for over a year, being able to specify the type of dependency (compile, runtime, test etc) is paramount to getting on top of the dependency dump in the first place. Previously with Ant, multiple classpaths had to be defined carefully, all of which ended up messing up our IDE. A sketch of this follows below.
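
Here is what those scoped dependencies look like in a build.gradle of that era; the versions are illustrative, and compile/runtime/testCompile were the configuration names back then:

dependencies {
    compile 'org.slf4j:slf4j-api:1.7.7'
    runtime 'ch.qos.logback:logback-classic:1.1.2'
    testCompile 'junit:junit:4.11'
}

Then gradle dependencies prints the resolved tree per configuration, and gradle dependencyInsight --dependency slf4j-api tells you exactly who dragged a given jar in.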

You will also be well equipped to sort stuff out when logback, slf4j and log4j decide to not play nice together. Oh yes, some obscure combination of versions of the three will throw class-loading exceptions. It is like an unholy trinity, and believe me, you want the Gradle gods on your side then to find the culprit.
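
When that unholy trinity strikes, the usual exorcism is a blanket exclusion plus one explicit bridge; a sketch:

configurations.all {
    // ban the stray log4j that some transitive dependency drags in
    exclude group: 'log4j', module: 'log4j'
}

dependencies {
    // route legacy log4j calls through slf4j instead
    runtime 'org.slf4j:log4j-over-slf4j:1.7.7'
}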

Awesome plugins

Nuff said.

In the end…

Happy to say that after a year of wallowing in despair and ignominy, the build scripts and build pipeline have gotten some major love and finally migrated to Gradle. A happy hipster day indeed.

Thnks fr th mmrs Ant! Now go pop some tags, you got Gradle in your pocket, hunting and looking for a come up, you know everything with Gradle is fucking awesome.

dotfiles and Java ninjutsu

All work mentioned below are open and done by true ninjas heralding the dark embrace of shells and terminals. I am but a mere follower of the art. This blog post acts as a harrowing reminder that one should constantly strive for ninpo perfection.

My dotfiles project, https://github.com/yveshwang/dotfiles, is very much set to my personal liking. They are but a tiny subset of, and based on, https://github.com/mathiasbynens/dotfiles/ and https://github.com/paulirish/dotfiles.

In addition to some git and vim tweaks, the added bonus here is that you can switch Java versions with the following command, setjdk, similar to the update-java-alternatives command in Ubuntu. This is brilliant!

setjdk magic!

This sterling little pearler lives in the .extra file as:

# switch JDK version for maverick
# http://www.jayway.com/2014/01/15/how-to-switch-jdk-version-on-mac-os-x-maverick/
function setjdk() {
  if [ $# -ne 0 ]; then
    # drop the system Java stub and any previously selected JDK from PATH
    removeFromPath '/System/Library/Frameworks/JavaVM.framework/Home/bin'
    if [ -n "${JAVA_HOME+x}" ]; then
      removeFromPath "$JAVA_HOME"
    fi
    # resolve the requested version (e.g. 1.7) and put its bin first in PATH
    export JAVA_HOME=$(/usr/libexec/java_home -v "$@")
    export PATH="$JAVA_HOME/bin:$PATH"
  fi
}
function removeFromPath() {
  # strip the given directory out of PATH wherever it appears
  export PATH=$(echo "$PATH" | sed -E -e "s;:$1;;" -e "s;$1:?;;")
}
setjdk 1.7

edit 03.04.2014: github loves dotfiles! http://dotfiles.github.io/

A newbie’s guide to setting up a Java dev env on a Mac

As a newly baptised MacBook Pro user, I was keen on getting started with Java development.

As a precursor, I have been a Java developer since 2003 and have used an MS environment all this time. I am an avid user of Linux but by no means an expert. This amounts to quite a lot of fiddling about when setting up a BSD environment for proficient software development. This Java developer’s environment guide will hopefully address the initial barriers I faced and help other newbies out there.

My dev env targets Java development with an emphasis on following software development methodologies. The latter means that UML, or any other graphical design language for that matter, ranks fairly high in my software design and implementation processes. My daily work revolves around web development, so setting up web servers or web containers is essential.

This guide targets Mac OSX 10.6.4.

1. Install XCode

XCode is the developer environment for Macs. It includes GNU packages like gcc, gdb, etc. Sign up with Apple and get XCode. This will install the essentials for iPhone development: Objective-C, libraries and IDEs.

http://developer.apple.com/

2. Install MacPorts

MacPorts is essential for installing additional packages like MySQL, Apache, etc. onto a Mac. This application is dependent on XCode. As the name suggests, it provides ported versions of these packages and behaves similarly to apt-get.

MacPorts installs applications in the /opt/local/ directory. It separates itself from the core BSD directory structures, thus ensuring that packages installed using MacPorts do not overlap with existing executables.

http://www.macports.org/

The easiest way is to download the dmg package and then run the following command to update MacPorts to the latest version.

sudo port -v selfupdate

3. Install Source Control

I use SVN and Git for various different projects and would recommend setting up both. Naturally CVS would be nice, but it seems a bit out of fashion these days. For new projects, perhaps consider Git for distributed source control.

svnX is a nice free GUI for SVN on a Mac. http://code.google.com/p/svnx/

Gity is a free Git GUI for Macs. http://macendeavor.com/gity

4. Install IDE

Naturally this is entirely up to you. I switch between Netbeans and Eclipse with the latter being my main preference.

Netbeans 6.9.1 – http://netbeans.org/downloads/
Eclipse Helios 3.6 – http://www.eclipse.org/downloads/

For ease of development, install plugins for SVN and Git on your IDE of choice. In addition, I usually install GWT plugins, Glassfish and other supporting packages that help improve efficiency.

5. Install Apache

The process of installing Apache is rather simple with MacPorts. However, configuring it may require some finesse and knowledge. The steps are derived from Techie Corner [1] and MacNN [2]. Please note that Apache comes preinstalled on OS X, so feel free to use that, but you can also install your very own using MacPorts. The following is from MacNN [2]:

To install the latest version of Apache 2, type the following:

sudo port install apache2

When that is done, you have to create a conf file (which you can edit to configure Apache 2):

sudo cp /opt/local/apache2/conf/httpd.conf.sample /opt/local/apache2/conf/httpd.conf

5.1. Preserve Old Apache Directories

Please perform the following steps if you wish for your “Sites” directories (i.e. /Users/user/Sites) to be accessed via http://your-server/~user, as is the case with Apache 1.3.

Open /opt/local/apache2/conf/httpd.conf, and change:
DocumentRoot "/opt/local/apache2/htdocs"
to
DocumentRoot "/Library/WebServer/Documents"

and change
<Directory "/opt/local/apache2/htdocs">
to
<Directory "/Library/WebServer/Documents">

Then, uncomment this line:

Include conf/extra/httpd-userdir.conf

5.2. Some Apache 2 extras

You may also want to uncomment the following lines in /opt/local/apache2/conf/httpd.conf:

Include conf/extra/httpd-autoindex.conf (Fancy directory listing)
Include conf/extra/httpd-default.conf (Some default settings)

5.3. Make Apache 2 access a little easier

Warning: Remember that OS X has a preinstalled version of Apache (1.3), which can be turned on and off via “Personal Web Sharing” in Preferences, or via the “apachectl” command. That command points to the preinstalled version of Apache, and not the one we are installing!

To make starting and stopping Apache 2 a little easier, we will construct an alias to Apache 2’s apachectl. Add the following line to ~/.profile (Panther/Tiger) and restart the terminal to gain access to it:

alias apache2ctl='sudo /opt/local/apache2/bin/apachectl'

5.4. Start Apache 2 on reboot

Warning: When I perform the command below, I eventually get the following error: Workaround Bonjour: Unknown error: 0. I’ve read that this is harmless, but I really don’t know for sure. I assume that everything ends up in proper order after a reboot.

To start Apache 2 now and whenever the system reboots, type the following:

sudo launchctl load -w /Library/LaunchDaemons/org.macports.apache2.plist

5.5. Notes

We added “sudo” to the apache2ctl alias, so if prompted for a password, enter your admin password.

To run Apache 2: apache2ctl start
To stop Apache 2: apache2ctl stop

6. Install MySQL

MySQL is installed using MacPorts. The following is from MacNN [2]:

Note: MySQL provides a very nice binary installer, so feel free to go ahead and use that instead of the port. I, however, like having my entire system (Apache 2, PHP 5, MySQL 5) installed together in the same fashion.

To install MySQL 5, we will first get the port. If you want MySQL 5 to run automatically after system startup, perform the following commands:

Warning: When I perform the launchctl command below, I eventually get the following error: Workaround Bonjour: Unknown error: 0. I’ve read that this is harmless, but I really don’t know for sure. I assume that everything ends up in proper order after a reboot.

sudo port install mysql5 +server
sudo launchctl load -w /Library/LaunchDaemons/org.macports.mysql5.plist

Otherwise, run the following:

sudo port install mysql5

6.1. Make MySQL 5 access a little easier

When done, you may want to add the following lines to ~/.profile (Panther/Tiger) and restart the Terminal to gain access to them:

Warning: You will need to type sudo -v before using mysqlstart, to avoid being prompted for a password in the background (thus preventing MySQL from starting).

alias mysqlstart='sudo mysqld_safe5 &'
alias mysqlstop='mysqladmin5 -u root -p shutdown'

The “-p” assumes that you will be using a password for root, which you of course should be.

6.2. Set up DB

Then, run the following to set up the database:

sudo -u mysql mysql_install_db5

You will now be able to use MySQL 5 (and you can use mysqlstart and mysqlstop for convenience).

I also recommend immediately adding a password to your root account:

mysqladmin -u root password [yourpw]

7. Install PHP

Again using MacPorts, PHP is supported with the aforementioned Apache. The following is from MacNN [2]:

(Make sure Apache 2 is stopped before proceeding.)

To install the latest version of PHP 5, type either:

sudo port install php5 +apache2 +mysql5 +pear (with PEAR)
or
sudo port install php5 +apache2 +mysql5 (without PEAR)

When that is done, register PHP 5 with Apache 2:

cd /opt/local/apache2/modules
sudo /opt/local/apache2/bin/apxs -a -e -n "php5" libphp5.so

And create a php.ini file (which you can edit to configure PHP 5):

cp /opt/local/etc/php.ini-dist /opt/local/etc/php.ini

Then open /opt/local/apache2/conf/httpd.conf again, and add this line:

Include conf/extras-conf/*.conf

You will probably want to register index.php with Apache 2 as a directory index page (automatically loaded as the directory index). If you would like to do this, also change:

DirectoryIndex index.html
to
DirectoryIndex index.html index.php

8. Install phpMyAdmin

This package is really easy to use for manipulating and managing MySQL databases. The following is from MacNN [2]:

The port is not updated regularly, and installation is ridiculously easy for phpMyAdmin, so download it directly from

http://www.phpmyadmin.net/

Next, untar the downloaded file. Rename the directory to “phpMyAdmin” and place it in your document root (/Library/WebServer/Documents if you have preserved the old Apache directories).

Create a PHP file called config.inc.php and place it in the root of the phpMyAdmin folder. This file will override settings defined by phpMyAdmin/libraries/config.default.php. The following is my config.inc.php file. Some of the lines are not strictly necessary, but I like readability and consistency:


<?php
$cfg['blowfish_secret'] = '[A passphrase for internal use]';
$i = 0; $i++;
$cfg['Servers'][$i]['host'] = 'localhost';
$cfg['Servers'][$i]['connect_type'] = 'tcp';
$cfg['Servers'][$i]['extension'] = 'mysqli';
$cfg['Servers'][$i]['auth_type'] = 'cookie';
$cfg['Servers'][$i]['user'] = '';
$cfg['Servers'][$i]['password'] = '';
?>

That’s it! You should now be able to access it at http://your-server/phpMyAdmin. You can log in as root, add more users, create databases, etc.

9. Installing Enterprise Architect

Though this is entirely debatable, enterprise UML tools are often Windows-only. It’s nice to have StarUML or ArgoUML, but they really lack the project management and requirement analysis functionalities. My preference is Enterprise Architect from Sparx Systems.

In order to run Windows applications correctly on a Mac, we need virtualisation software. Parallels and VMware are fine, but I believe they require a Windows installation, which can be costly. I recommend the following:

Winebottler – http://winebottler.kronenberg.org/
CrossOver – http://www.codeweavers.com/products/

Winebottler is free and is supported by the open-source community. CrossOver is proprietary. Currently I prefer CrossOver due to a display bug Winebottler has when running Enterprise Architect.

10. That’s it!

That’s pretty much it to get coding right off the bat.

Reference

[1] Techie Corner, available at http://www.techiecorner.com/174/how-to-install-apache-php-mysql-with-macport-in-mac-os-x/

[2] MacNN Forum, available at http://forums.macnn.com/79/developer-center/322362/tutorial-installing-apache-2-php-5-a/