This repository was archived by the owner on Sep 17, 2019. It is now read-only.

Elasticsearch container will crash under default Docker for Mac configuration #6

Closed

@dustinrue (Contributor)

Description
The default Docker for Mac (and presumably Windows) configuration limits Docker to 2GB of memory. The default heap size for Elasticsearch is also 2GB, which means the Elasticsearch container will immediately exit with a potentially confusing "error 137" (it's killed due to running out of memory) as soon as it is interacted with, particularly with Cerebro.

The suggestion is to either note in the readme that you must configure Docker for Mac/Windows to allow more than 2GB of memory, or limit the Elasticsearch heap size with:

environment:
  ES_JAVA_OPTS: "-Xms750m -Xmx750m"

Or add it to the existing config file.
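For context, a minimal docker-compose sketch of where that setting would live (the service name, image tag, and surrounding layout here are illustrative assumptions, not necessarily what this repo uses):

```yaml
version: '2'
services:
  elasticsearch:
    image: elasticsearch:5.2   # illustrative tag
    environment:
      # Cap the JVM heap well below Docker for Mac's 2GB default
      # so the container is not OOM-killed (exit code 137)
      ES_JAVA_OPTS: "-Xms750m -Xmx750m"
```

Setting -Xms and -Xmx to the same value avoids heap resizing pauses, which is why both are given together.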

Activity

khorolets commented on Mar 9, 2017

Hi there! Can't confirm; I've just run this compose on two Macs with Docker limited to 1GB of RAM and haven't seen the mentioned error.

dustinrue (Contributor, Author) commented on Mar 9, 2017

When I allow 1GB of memory for Docker I get a crash loop

[screenshot: 2017-03-09 14:34:17]

I'm using Docker 17.03.0-ce-mac2 (15657)

khorolets commented on Mar 9, 2017

I'll provide screenshots tomorrow morning, in about 12 hours. BTW, I'm not arguing, just saying that I can't reproduce it on two machines :)

dustinrue (Contributor, Author) commented on Mar 9, 2017

Completely understand. What we'll find is that one of us has a difference in our setup that will help find the true cause.

khorolets commented on Mar 10, 2017

As promised, here's the same config on the second machine. This one is on Sierra and the other is on El Capitan. I think the issue isn't about memory.

[screenshot: wp-docker_macos]

pjrola commented on Nov 8, 2017

This post helped a ton. I was running some other things on Docker, like Jenkins and MySQL, in combination with Elasticsearch and Kibana, and I'm pretty sure I hit the limit. I upped my memory to 4GB and have no issues now. Thanks!

123avi commented on Jan 31, 2018

@pjrola thank you so much, that really saved me :) I also updated the memory in the Docker engine (Docker menu -> Preferences) and that did the trick!

kiranreddyd commented on Jul 5, 2018

Thanks a bunch! Helped me save so much time!

dnuttle commented on Jan 24, 2019

I have seen an issue where Elasticsearch crashes when I start Logstash. I saw an OutOfMemoryError once and adjusted the heap size (1536m) and the Docker limit (3g). Now when I start Logstash, Elasticsearch just abruptly dies; there is nothing in the log at all. Oddly, just a few times I've gotten it to work (the Logstash conf file is very simple, just sends stdin to Elasticsearch), so I can't fathom what's happening. I also see that Logstash hogs a lot of the CPU while it's starting.
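When Logstash is in the mix, its JVM competes with Elasticsearch for the same Docker memory budget, so capping both heaps below the combined limit can help. A sketch under assumed service names (the official Logstash image honors LS_JAVA_OPTS, but the exact layout and sizes here are illustrative):

```yaml
services:
  elasticsearch:
    environment:
      # Matches the 1536m heap mentioned above
      ES_JAVA_OPTS: "-Xms1536m -Xmx1536m"
  logstash:
    environment:
      # Keep Logstash's heap small so both JVMs fit inside the 3GB Docker limit
      LS_JAVA_OPTS: "-Xms512m -Xmx512m"
```

With a 3GB Docker limit, 1536m + 512m of heap still leaves roughly 1GB for off-heap memory and everything else running in Docker.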

Javetz commented on Feb 13, 2019

Update the memory allocation to 4GB in the Advanced settings tab.

niemyjski commented on May 7, 2019

This has not worked for me. I'm constantly getting crashes.

kadnan commented on May 14, 2019

This post helped me too. I am running Cassandra and the other node kept exiting. I increased the RAM and it worked.

AlekKras commented on Jun 24, 2019

This post helped me a lot with a completely different project. I didn't know about the memory problem. Saved me a ton of time! Thank you, @dustinrue !

philliphartin commented on Jul 25, 2019

I've just been troubleshooting why an Elasticsearch based project wouldn't work and this was exactly the issue! Thanks for highlighting this.

mystredesign commented on Aug 19, 2019

I upped my memory to 7.5GB (of 16GB) and still get the error... Does it really need this much memory? I don't think this is resolved...
Update: I changed my swap space to 4GB and memory to 4GB and it seems to be working now... still.

dustinrue (Contributor, Author) commented on Aug 19, 2019

@mystredesign the point is to reduce the amount of memory the JVM is allowed to use relative to the amount of memory Docker is allowed to use. If you reduce the JVM's memory allowance and still have issues, then it's likely down to the number of docs you are trying to index.
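That relationship can be expressed directly in compose (a sketch assuming compose v2 syntax, where mem_limit caps the container; the heap is sized to roughly half the limit to leave room for off-heap and filesystem-cache use, which is the common Elasticsearch guideline):

```yaml
services:
  elasticsearch:
    # Hard cap on what the container may use; must fit in Docker's total allowance
    mem_limit: 1g
    environment:
      # Heap at ~50% of the container limit; the remainder serves
      # Lucene's off-heap structures and other JVM overhead
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
```

If the heap equals (or exceeds) the container limit, the kernel OOM-kills the process and Docker reports exit code 137, which is exactly the failure this issue describes.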


          Elasticsearch container will crash under default Docker for Mac configuration · Issue #6 · 10up/wp-local-docker