
I am a newbie when it comes to OpenStack with MAAS and Autopilot. I would like to create my own private cloud with Ubuntu 14.04 LTS and MAAS 1.9.

My goal is to have a decent setup I can use to deploy a pretty heavy Java Spring Tomcat application with MySQL, Solr, and RabbitMQ, along with MongoDB and/or Couch in the mix for a separate service I need to write. The application sifts through quite a bit of data and stores the analysis results for graphing (real-time and offline).

This application (minus the Couch service) currently runs on a single Ubuntu machine (no cloud) with 32 GB RAM, a 3rd-gen i7, a 500 GB SSD, and a 2 TB secondary HDD. That is my QA / small-scale performance-test environment only. I am building a home sandbox cloud to deploy this app:

I have 6 computers, each with the following specs:

  • 4-core Intel CPU with AMT technology
  • 8 GB RAM
  • 2× Gigabit NICs
  • 1× 240 GB SSD
  • 1× 1 TB HDD

I also have 2× D-Link 8-port EasySmart Gigabit Ethernet switches (DGS-1100-08). I was trying to follow Dimiter's blog, though he had the network architecture in mind without the second HDD.

Now my question is about the second disks. Would Ceph/Swift intelligently use the second disk for journalling or for actual object storage? For my storage needs (less than 2 TB), would using HDDs be a good idea, as I cannot afford to put 1 TB SSDs in these boxes? Given that the first disk in each box is a 240 GB SSD, would Ceph/Swift use the disks appropriately?

Looking forward to your responses, as I don't want to go through the headache of deploying my app only to find out I need a different topology altogether.

0xF2
GunerE

2 Answers


The Autopilot will consume all disks it can find on each storage node that are not in use. Typically, for example, Ubuntu gets installed on /dev/sda, and any other /dev/sdX will be used by Ceph or Swift. There is no preference for SSD vs HDD in Autopilot's mind yet.

Now, MAAS 1.9 does support bcache, so you could speed things up a bit with that SSD of yours, but the 15.11 version of Autopilot does not yet know how to use it, I'm afraid.
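If you did go the manual route, a bcache pairing can be set up from the command line. This is a rough sketch only; the device names (/dev/sdb for the HDD, /dev/sda4 for a spare SSD partition) and the cache-set UUID placeholder are assumptions for illustration — check your layout with `lsblk` first:

```shell
# Install the bcache userspace tools
sudo apt-get install bcache-tools

# Format the HDD as the backing device
sudo make-bcache -B /dev/sdb

# Format a spare SSD partition as the cache device
sudo make-bcache -C /dev/sda4

# Attach the cache set to the backing device
# (substitute the cache-set UUID printed by make-bcache -C)
echo <cache-set-uuid> | sudo tee /sys/block/bcache0/bcache/attach
```

The resulting /dev/bcache0 device can then be partitioned and used like any other block device.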

0xF2

Purely from a Ceph perspective, you would want to dedicate a small partition of the primary SSD to the journal and give the 1 TB HDD to the OSD daemon.
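With the ceph-deploy tooling of that era, the SSD journal / HDD data split is expressed by passing both devices to the OSD commands. A sketch only — the hostname (storage-node1) and device names (/dev/sdb for the HDD, /dev/sda5 for the journal partition) are assumptions for illustration:

```shell
# Prepare an OSD: data on the HDD, journal on an SSD partition
ceph-deploy osd prepare storage-node1:/dev/sdb:/dev/sda5

# Activate it (prepare creates the data partition as /dev/sdb1)
ceph-deploy osd activate storage-node1:/dev/sdb1:/dev/sda5
```

Repeat per storage node; each OSD's journal then lands on fast SSD while bulk object data stays on the cheap 1 TB spindle.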

As Andreas' answer explains, Autopilot does not do this automagically yet, so you would need to build out the hyperconverged OpenStack and Ceph cluster manually.

For performance, again from a Ceph point of view, at least 10 OSD nodes are generally recommended. I would also suggest taking a look at Red Hat's reference architecture for Ceph and MySQL; it should give you a rough idea of what performance you can achieve with MySQL on Ceph, especially since it documents the hardware used to achieve those numbers.

0xF2