nairobi/Dockerfile
Dominik Peters bbcdb6f086 Add chromium-driver to Dockerfile
In the documentation about using docker, it says:

> System tests also work out of the box, although they might fail the
> first time while the tool running the tests downloads the right
> version of Chromedriver (which is needed to run them), and only
> "headless" mode (with a browser running in the background) is
> supported, which is the mode you'd probably use more than 95% of the
> time anyway.  For example, to run the tests for the homepage:
>
> POSTGRES_PASSWORD=password docker-compose run app bundle exec \
> rspec spec/system/welcome_spec.rb

For me, as predicted, the tests fail the first time, but they continue
to fail on subsequent runs. The errors are of the form:

```
Failure/Error: example.run
Selenium::WebDriver::Error::WebDriverError:
  unable to connect to /home/consul/.cache/selenium/chromedriver/linux64/132.0.6834.110/chromedriver 127.0.0.1:9515
  # /usr/local/bundle/gems/selenium-webdriver-4.25.0/lib/selenium/webdriver/common/service_manager.rb:132:in `connect_until_stable'
  # ... omitted ...
  # ./spec/spec_helper.rb:41:in `block (3 levels) in <top (required)>'
  # /usr/local/bundle/gems/i18n-1.14.6/lib/i18n.rb:353:in `with_locale'
  # ./spec/spec_helper.rb:40:in `block (2 levels) in <top (required)>'
```

Installing chromium-driver in the Dockerfile fixed it for me.
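
To verify the fix, rebuild the image and rerun the same spec (a sketch; it assumes the `app` service name from the documentation quoted above):

```
POSTGRES_PASSWORD=password docker-compose build app
POSTGRES_PASSWORD=password docker-compose run app bundle exec \
  rspec spec/system/welcome_spec.rb
```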
2025-03-01 18:57:38 +01:00


FROM ruby:3.2.6-bookworm
ENV DEBIAN_FRONTEND=noninteractive
# Install essential Linux packages
RUN apt-get update -qq \
    && apt-get install -y \
       build-essential \
       cmake \
       imagemagick \
       libappindicator1 \
       libpq-dev \
       libxss1 \
       memcached \
       pkg-config \
       postgresql-client \
       sudo \
       unzip
# Install Chromium for E2E integration tests
RUN apt-get update -qq && apt-get install -y chromium chromium-driver
# Files created inside the container respect the ownership
RUN adduser --shell /bin/bash --disabled-password --gecos "" consul \
    && adduser consul sudo \
    && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN echo 'Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/bundle/bin:/usr/local/node/bin"' > /etc/sudoers.d/secure_path
RUN chmod 0440 /etc/sudoers.d/secure_path
# Define where our application will live inside the image
ENV RAILS_ROOT=/var/www/consul
# Create application home. App server will need the pids dir so just create everything in one shot
RUN mkdir -p $RAILS_ROOT/tmp/pids
# Set our working directory inside the image
WORKDIR $RAILS_ROOT
# Install Node
COPY .node-version ./
ENV PATH=/usr/local/node/bin:$PATH
RUN curl -sL https://github.com/nodenv/node-build/archive/master.tar.gz | tar xz -C /tmp/ && \
    /tmp/node-build-master/bin/node-build `cat .node-version` /usr/local/node && \
    rm -rf /tmp/node-build-master
# Use the Gemfiles as Docker cache markers. Always bundle before copying app src.
# (the src likely changed and we don't want to invalidate Docker's cache too early)
COPY .ruby-version ./
COPY Gemfile* ./
RUN bundle install
COPY package* ./
RUN npm install
# Copy the Rails application into place
COPY . .
ENTRYPOINT ["./docker-entrypoint.sh"]
# Define the script we want run once the container boots
# Use the "exec" form of CMD so our script shuts down gracefully on SIGTERM (i.e. `docker stop`)
# CMD [ "config/containers/app_cmd.sh" ]
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]