
Alternatives to Bazel

Pants, Webpack, Ansible, Buck, and CMake are the most popular alternatives and competitors to Bazel.

What is Bazel and what are its top alternatives?

Bazel is a build tool that builds code quickly and reliably. It is used to build the majority of Google's software, and thus it has been designed to handle build problems present in Google's development environment.
Bazel is a tool in the Java Build Tools category of a tech stack.
Bazel is an open source tool, and its repository is hosted on GitHub.
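Bazel builds are declared in BUILD files written in Starlark, a Python-like language. As a rough, hypothetical illustration (the target names, file globs, and test class are invented; java_library and java_test are standard Bazel rules):

```python
# BUILD -- a minimal, hypothetical Bazel BUILD file (Starlark is Python-like).

java_library(
    name = "app_lib",                             # invented library target
    srcs = glob(["src/main/java/**/*.java"]),     # compile these sources
)

java_test(
    name = "app_lib_test",                        # invented test target
    srcs = glob(["src/test/java/**/*.java"]),
    test_class = "com.example.AppLibTest",        # invented test class name
    deps = [":app_lib"],                          # the tests depend on the library above
)
```

With targets declared like this, commands such as `bazel build //:app_lib` and `bazel test //:app_lib_test` rebuild and retest only what changed, which is what "builds code quickly and reliably" refers to.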

Top Alternatives to Bazel

  • Pants

    Pants is a build system for Java, Scala and Python. It works particularly well for a source code repository that contains many distinct projects. ...

  • Webpack

    A bundler for javascript and friends. Packs many modules into a few bundled assets. Code splitting allows parts of the application to be loaded on demand. Through "loaders", modules can be CommonJs, AMD, ES6 modules, CSS, Images, JSON, Coffeescript, LESS, ... and your custom stuff. ...

  • Ansible

    Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. Ansible’s goals are foremost those of simplicity and maximum ease of use. ...

  • Buck

    Buck encourages the creation of small, reusable modules consisting of code and resources, and supports a variety of languages on many platforms. ...

  • CMake

    CMake is used to control the software compilation process using simple, platform- and compiler-independent configuration files, and to generate native makefiles and workspaces that can be used in the compiler environment of the user's choice. ...

  • Jenkins

    In a nutshell, Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project. ...

  • Apache Maven

    Maven allows a project to build using its project object model (POM) and a set of plugins that are shared by all projects using Maven, providing a uniform build system. Once you familiarize yourself with how one Maven project builds, you automatically know how all Maven projects build, saving you immense amounts of time when trying to navigate many projects. ...

  • Gradle

    Gradle is a build tool with a focus on build automation and support for multi-language development. If you are building, testing, publishing, and deploying software on any platform, Gradle offers a flexible model that can support the entire development lifecycle from compiling and packaging code to publishing web sites. ...

Bazel alternatives & related posts


Pants

Build system by Twitter, Foursquare, and Square
PROS OF PANTS
  • Creates deployable packages (6)
  • Runs on Linux (4)
  • Runs on OS X (4)
  • BUILD files (4)
  • Runs tests (4)
  • Scales (4)
  • Flexibility (2)
  • Extensible (2)
CONS OF PANTS
  • None listed yet
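The "BUILD files" and "Creates deployable packages" points above refer to Pants' own Python-like target declarations. A minimal, hypothetical sketch (target names and entry point are invented, roughly following Pants v2 conventions):

```python
# BUILD -- a hypothetical Pants (v2-style) BUILD file; all names are invented.

python_sources(name="lib")            # the Python sources in this directory

pex_binary(
    name="app",                       # a deployable, self-contained .pex package
    entry_point="main.py",            # invented entry point
    dependencies=[":lib"],            # depends on the sources declared above
)
```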


    Webpack

    A bundler for javascript and friends
    PROS OF WEBPACK
    • Most powerful bundler (309)
    • Built-in dev server with livereload (182)
    • Can handle all types of assets (142)
    • Easy configuration (87)
    • Laravel-mix (22)
    • Overengineered, Underdeveloped (4)
    • Makes it easy to bundle static assets (2)
    • Webpack-Encore (2)
    • Redundant (1)
    • Better support in Browser Dev-Tools (1)
    CONS OF WEBPACK
    • Hard to configure (15)
    • No clear direction (5)
    • Spaghetti-Code out of the box (2)
    • SystemJS integration is quite lackluster (2)
    • Loader architecture is quite a mess (unreliable/buggy) (2)
    • Fire and Forget mentality of Core-Developers (2)

    related Webpack posts

    Russel Werner
    Lead Engineer at StackShare · 32 upvotes · 2M views

    StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.

    Related to this, we needed a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!

    #StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit
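The post doesn't include the deployment script itself; purely as an illustration of the rollout step it describes (upload the Webpack bundles to S3, then flip a Redis key so the Rails app serves the new bundles), a minimal sketch in Python might look like the following. The bucket, key names, and file layout are invented, and it assumes boto3 and redis-py:

```python
# Hypothetical sketch of the rollout step described above; all names are invented.
import glob
import os

import boto3
import redis

build_id = os.environ["CIRCLE_SHA1"]             # e.g. the CI commit SHA
bucket = "example-frontend-bundles"              # invented bucket, fronted by CloudFront

s3 = boto3.client("s3")
for path in glob.glob("dist/*.js"):
    key = f"{build_id}/{os.path.basename(path)}"
    s3.upload_file(path, bucket, key)            # push each Webpack bundle to S3

# Tell the Rails app which bundle set to serve.
redis.Redis(host="localhost", port=6379).set("frontend:current_build", build_id)
```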

    Jonathan Pugh
    Software Engineer / Project Manager / Technical Architect · 25 upvotes · 2.9M views

    I needed to choose a full stack of tools for cross platform mobile application design & development. After much research and trying different tools, these are what I came up with that work for me today:

    For the client coding I chose Framework7 because of its performance, easy learning curve, and very well designed, beautiful UI widgets. I think it's perfect for solo development or small teams. I didn't like React Native. It felt heavy and rigid to me. Framework7 allows the use of #CSS3, which I think is the best technology to come out of the #WWW movement. No other tech before has allowed designers and developers to build such flexible, high-performance, customisable user interface elements that are highly responsive and hardware accelerated. Now that #CSS3 includes variables and flexboxes, it is truly a powerful language and there is no longer a need for preprocessors such as #SCSS / #Sass / #less. React Native contains a very limited interpretation of #CSS3, which I found very frustrating after using #CSS3 for some years already and knowing its powerful features. The other very nice feature of Framework7 is that you can even build for the browser if you want your app to be available for desktop web browsers. The latest release also includes the ability to build for #Electron so you can have MacOS, Windows and Linux desktop apps. This is not possible with React Native yet.

    Framework7 runs on top of Apache Cordova. Cordova and webviews have been slated as being slow in the past. Having a game developer background, I found the tweaks to make it run as smooth as silk. One of those tweaks is to use WKWebView. Another important one was using srcset on images.

    I use #Template7 for the templating system, which is a no-nonsense, mobile-centric, #HandleBars-style extensible templating system. It's easy to write custom helpers for, is fast and has a small footprint. I'm not forced into a new paradigm or learning some new syntax. It operates with standard JavaScript, HTML5 and CSS 3. It's written by the developer of Framework7 and so dovetails with it as expected.

    I configured TypeScript to work with the latest version of Framework7. I consider TypeScript to be one of the best creations to come out of Microsoft in some time. They must have an amazing team working on it. It's very powerful and flexible. It helps you catch a lot of bugs and also provides code completion in supporting IDEs. So for my IDE I use Visual Studio Code which is a blazingly fast and silky smooth editor that integrates seamlessly with TypeScript for the ultimate type checking setup (both products are produced by Microsoft).

    I use Webpack and Babel to compile the JavaScript. TypeScript can compile to JavaScript directly but Babel offers a few more options and polyfills so you can use the latest (and even prerelease) JavaScript features today and compile to be backwards compatible with virtually any browser. My favorite recent addition is "optional chaining" which greatly simplifies and increases readability of a number of sections of my code dealing with getting and setting data in nested objects.

    I use some Ruby scripts to process images with ImageMagick and pngquant to optimise for size and even auto insert responsive image code into the HTML5. Ruby is the ultimate cross platform scripting language. Even as your scripts become large, Ruby allows you to refactor your code easily and make it Object Oriented if necessary. I find it the quickest and easiest way to maintain certain aspects of my build process.

    For the user interface design and prototyping I use Figma. Figma has an almost identical user interface to #Sketch but has the added advantage of being cross platform (MacOS and Windows). Its real-time collaboration features are outstanding and I use them often as I work mostly on remote projects. Clients can collaborate in real-time and see changes I make as I make them. The clickable prototyping features in Figma are also very well designed and mean I can send clickable prototypes to clients to try user interface updates as they are made and get immediate feedback. I'm currently also evaluating the latest version of #AdobeXD as an alternative to Figma as it has the very cool auto-animate feature. It doesn't have real-time collaboration yet, but I heard it is proposed for 2019.

    For the UI icons I use Font Awesome Pro. They have the largest selection and best looking icons you can find on the internet with several variations in styles so you can find most of the icons you want for standard projects.

    For the backend I was using the #GraphCool Framework. As I later found out, #GraphQL still has some way to go in order to provide the full power of a mature graph query language, so later in my project I ripped out #GraphCool and replaced it with CouchDB and Pouchdb, primarily so I could provide good offline app support. CouchDB with Pouchdb is a very flexible and efficient combination and overcomes some of the restrictions I found in #GraphQL and hence #GraphCool also. The most impressive and important feature of CouchDB is its replication. You can configure it in various ways for backups, fault tolerance, caching or conditional merging of databases. CouchDB and Pouchdb even support storing, retrieving and serving binary or image data or other mime types. This removes a level of complexity usually present in database implementations where binary or image data is usually referenced through an #HTML5 link. With CouchDB and Pouchdb apps can operate offline and sync later, very efficiently, when the network connection is good.
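The replication feature praised here is exposed through CouchDB's HTTP API. As a rough sketch (server URL, credentials, and database names are invented), starting a continuous replication from Python could look like this:

```python
# Hypothetical sketch: start a continuous CouchDB replication via /_replicate.
# The URL, credentials, and database names are invented for illustration.
import requests

couch = "http://admin:password@localhost:5984"

resp = requests.post(
    f"{couch}/_replicate",
    json={
        "source": "app_local",       # replicate changes from this database...
        "target": "app_backup",      # ...into this one
        "continuous": True,          # keep syncing as new changes arrive
        "create_target": True,       # create the target if it doesn't exist yet
    },
)
resp.raise_for_status()
```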

    I use PhoneGap when testing the app. It auto-reloads your app when its code is changed and you can also install it on Android phones to preview your app instantly. iOS is a bit more tricky because of Apple's policies, so it's not available on the App Store, but you can build it and install it on your device yourself.

    So that's my latest mobile stack. What tools do you use? Have you tried these ones?


    Ansible

    Radically simple configuration-management, application deployment, task-execution, and multi-node orchestration engine
    PROS OF ANSIBLE
    • Agentless (284)
    • Great configuration (210)
    • Simple (199)
    • Powerful (176)
    • Easy to learn (155)
    • Flexible (69)
    • Doesn't get in the way of getting s--- done (55)
    • Makes sense (35)
    • Super efficient and flexible (30)
    • Powerful (27)
    • Dynamic Inventory (11; see the inventory sketch after this list)
    • Backed by Red Hat (9)
    • Works with AWS (7)
    • Cloud Oriented (6)
    • Easy to maintain (6)
    • Vagrant provisioner (4)
    • Simple and powerful (4)
    • Multi language (4)
    • Simple (4)
    • Because SSH (4)
    • Procedural or declarative, or both (4)
    • Easy (4)
    • Consistency (3)
    • Well-documented (2)
    • Masterless (2)
    • Debugging is simple (2)
    • Merge hash to get final configuration similar to hiera (2)
    • Fast as hell (2)
    • Manage any OS (1)
    • Work on windows, but difficult to manage (1)
    • Certified Content (1)
    CONS OF ANSIBLE
    • Dangerous (8)
    • Hard to install (5)
    • Doesn't Run on Windows (3)
    • Bloated (3)
    • Backward compatibility (3)
    • No immutable infrastructure (2)
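The "Dynamic Inventory" point in the pros list above refers to Ansible's ability to take its host list from any executable that prints JSON, instead of from a static inventory file. A minimal, hypothetical inventory script (group names and hosts are invented):

```python
#!/usr/bin/env python3
# Hypothetical Ansible dynamic inventory script. Ansible invokes it with --list
# and expects JSON describing groups and hosts; the names below are invented.
import json
import sys

INVENTORY = {
    "web": {
        "hosts": ["web1.example.com", "web2.example.com"],
        "vars": {"ansible_user": "deploy"},
    },
    "_meta": {"hostvars": {}},   # per-host variables; empty in this sketch
}

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(INVENTORY))
    else:
        print(json.dumps({}))    # a real script would also handle --host <name>
```

Something like `ansible -i ./inventory.py web -m ping` would then target whatever hosts the script reports, which is how inventories can track changing cloud resources such as EC2 instances.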

    related Ansible posts

    Tymoteusz Paul
    Devops guy at X20X Development LTD · 23 upvotes · 8M views

    Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

    It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), and only stop when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment to produce a proper, production-grade product.

    I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, docker, LXC, in open net, behind vpn - you name it.

    We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

    If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the commonly handy plugins like Slack or Apache Maven integration.

    The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets-management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

    Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, in a similar way as you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

    Sebastian Gębski

    Heroku was a decent choice to start a business, but at some point our platform was too big, too complex & too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate "works on my machine" syndrome & uniformize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move it to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers - all the container configuration & provisioning HAD (since the beginning) to be done in code (Infrastructure as Code) - we used Terraform & Ansible for that (respectively). This general trend of containerisation was accompanied by another, parallel & equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 & Amazon RDS.


    Buck

    A build system developed and used by Facebook
    PROS OF BUCK
    • Fast (4)
    • Java (1)
    • Facebook (1)
    • Runs on OSX (1)
    • Windows Support (1)
    CONS OF BUCK
    • Lack of Documentation (2)
    • Learning Curve (1)


    CMake

    An open-source system that manages the build process
    PROS OF CMAKE
    • Has package registry (1)
    CONS OF CMAKE
    • None listed yet


      Jenkins

      An extendable open source continuous integration server
      PROS OF JENKINS
      • Hosted internally (523)
      • Free open source (469)
      • Great to build, deploy or launch anything async (318)
      • Tons of integrations (243)
      • Rich set of plugins with good documentation (211)
      • Has support for build pipelines (111)
      • Easy setup (68)
      • It is open-source (66)
      • Workflow plugin (53)
      • Configuration as code (13)
      • Very powerful tool (12)
      • Many Plugins (11)
      • Continuous Integration (10)
      • Great flexibility (10)
      • Git and Maven integration is better (9)
      • 100% free and open source (8)
      • Slack Integration (plugin) (7)
      • GitHub integration (7)
      • Self-hosted GitLab Integration (plugin) (6)
      • Easy customisation (6)
      • Pipeline API (5)
      • Docker support (5)
      • Fast builds (4)
      • Hosted Externally (4)
      • Excellent docker integration (4)
      • Platform independence (4)
      • AWS Integration (3)
      • JOBDSL (3)
      • It's Everywhere (3)
      • Customizable (3)
      • Can be run as a Docker container (3)
      • It's worked (3)
      • Loose Coupling (2)
      • NodeJS Support (2)
      • Build PR Branch Only (2)
      • Easily extendable with seamless integration (2)
      • PHP Support (2)
      • Ruby/Rails Support (2)
      • Universal controller (2)
      CONS OF JENKINS
      • Workarounds needed for basic requirements (13)
      • Groovy with cumbersome syntax (10)
      • Plugin compatibility issues (8)
      • Lack of support (7)
      • Limited abilities with declarative pipelines (7)
      • No YAML syntax (5)
      • Too tied to plugin versions (4)

      related Jenkins posts

      Tymoteusz Paul's CI/CD write-up (Devops guy at X20X Development LTD · 23 upvotes · 8M views) is also listed here; see the full post under the related Ansible posts above.
      Thierry Schellenbach

      Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.

      Common tasks such as installing dependencies for the Go project, or building a binary are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.

      Travis has come a long way over the past years. I used to prefer Jenkins in some cases since it was easier to debug broken builds. With the addition of the aptly named “debug build” button, Travis is now the clear winner. It’s easy to use and free for open source, with no need to maintain anything.

      #ContinuousIntegration #CodeCollaborationVersionControl


      Apache Maven

      Apache build manager for Java projects.
      PROS OF APACHE MAVEN
      • Dependency management (138)
      • Necessary evil (70)
      • I’d rather code my app, not my build (60)
      • Publishing packaged artifacts (48)
      • Convention over configuration (43)
      • Modularisation (18)
      • Consistency across builds (11)
      • Prevents overengineering using scripting (6)
      • Runs Tests (4)
      • Lot of cool plugins (4)
      • Extensible (3)
      • Hard to customize (2)
      • Runs on Linux (2)
      • Runs on OS X (1)
      • Slow incremental build (1)
      • Inconsistent builds (1)
      • Undeterministic (1)
      • Good IDE tooling (1)
      CONS OF APACHE MAVEN
      • Complex (6)
      • Inconsistent builds (1)
      • Not many plugin-alternatives (0)

      related Apache Maven posts

      Tymoteusz Paul's CI/CD write-up (Devops guy at X20X Development LTD · 23 upvotes · 8M views) is also listed here; see the full post under the related Ansible posts above.
      Ganesa Vijayakumar
      Full Stack Coder | Technical Lead · 19 upvotes · 4.5M views

      I'm planning to create a web application and also a mobile application to provide a very good shopping experience to the end customers. In short, my application will aggregate product details from different sources and give the user a clear picture of when and where to buy a product with the best quality and cost.

      I have planned to develop this in many milestones for adding N number of features, and I have picked the core part (aggregating the product details from different sources) to complete first.

      As per my work experience and knowledge, I have chosen the following stacks for this mission.

      UI: I would like to develop this application using React, React Router and React Native since I'm a little bit familiar with them and, most importantly, they will help in developing both web and mobile apps. In addition, I'm gonna use JavaScript, jQuery, jQuery UI, jQuery Mobile, and Bootstrap wherever required.

      Service: I have planned to use Java as the main business layer language; as I have 7+ years of experience with it, I believe I can do better work using Java than other languages. In addition, I'm thinking of using Node.js.

      Database and ORM: I'm gonna pick MySQL as the DB and Hibernate as the ORM since I have good knowledge of and work experience with this combination.

      Search Engine: I need to deal with a large amount of product data and its detailed info to provide enough details to the end user, and at the same time I need to focus on the performance area too, so I have decided to use Solr as a search engine for product search and suggestions. In addition, I'm thinking of replacing Solr with Elasticsearch once I have explored/reviewed Elasticsearch enough.

      Host: As of now, my plan is to complete the application with decent features first and deploy it in a free hosting environment like Docker and Heroku, and then, once it is stable, I have planned to use the AWS products Amazon S3, EC2, Amazon RDS and Amazon Route 53. I'm not sure what Microsoft Azure's specialty is compared to Heroku and Amazon EC2 Container Service. Anyhow, I will explore these once again and pick the best-suited one for my requirements once I reach this level.

      Build and Repositories: I have decided to choose Apache Maven and Git, as these are my favorites and also very popular for builds and repositories, respectively.

      Additional Utilities :) - I would like to choose Codacy for code review, as their Startup plan will be very helpful to this application. I'm already experienced with Google CheckStyle and SonarQube, but I'm still looking into Codacy.

      Happy Coding! Suggestions are welcome! :)

      Thanks, Ganesa


      Gradle

      A powerful build system for the JVM
      PROS OF GRADLE
      • Flexibility (110)
      • Easy to use (51)
      • Groovy dsl (47)
      • Slow build time (22)
      • Crazy memory leaks (10)
      • Fast incremental builds (8)
      • Kotlin DSL (5)
      • Windows Support (1)
      CONS OF GRADLE
      • Unactionable documentation (8)
      • It is just the mess of Ant++ (6)
      • Hard to decide: ten or more ways to achieve one goal (4)
      • Bad Eclipse tooling (2)
      • Dependency on groovy (2)

      related Gradle posts

      Shared insights on Apache Maven and Gradle

      We use Apache Maven because it is a standard. Gradle is a very good alternative, but Gradle doesn't provide any advantage for our project. Gradle is slower (without the running daemon), needs more resources, and the learning curve is quite steep. Our project cannot make use of Gradle's great flexibility. On the other hand, Maven is a well-known tool integrated into many IDEs, Docker images and so on.

      Hajed Khlifi
      Shared insights on Docker, Gradle, and Java EE

      Hi, I'm working on dockerizing a heavy Java EE application whose installation is a complex process maintained by a Gradle project we've developed to install, configure and customize specific jar files and generate a runnable server application for the user at the end. I'm new to Docker. As I said, the problem is that we have a long process to install the app. The first alternative that popped into my head is to put the installer Gradle project in the Docker image and manage stateful data using the writable layer (in this case, I need to add Gradle too and the writable layer will be too heavy). Any advice? Thank you!
