Posts tagged apache

apache webdav configuration

posted on 2016-12-29 10:27

notes up front

  • don't use suexec. just don't.
  • you should be able to configure a vhost on your own, else the apache config will not be of use to you
  • we'll use ssl certificates, too


Load the apache modules:

a2enmod dav
a2enmod dav_fs

Create certificate:

cd /etc/apache2/ssl
openssl genrsa -out server.key 1024  ## create private key (the server.* filenames are placeholders)
openssl req -new -key server.key -out server.csr  ## create certificate signing request
openssl x509 -req -in server.csr -signkey server.key -out server.crt -days 3650  ## create certificate

Create a vhost config for your apache, and enable it:

<VirtualHost *:80>

        DocumentRoot /var/www/

        <Directory /var/www/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                DAV on

                AuthType Basic
                AuthName DAV
                AuthUserFile /var/www/
                Require valid-user
        </Directory>

        ErrorLog /var/www/
        LogLevel warn
        CustomLog /var/www/ combined
</VirtualHost>


<VirtualHost *:443>

        DocumentRoot /var/www/

        <Directory /var/www/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all

                DAV on

                AuthType Basic
                AuthName DAV
                AuthUserFile /var/www/
                Require valid-user
        </Directory>

        ErrorLog /var/www/
        LogLevel warn
        CustomLog /var/www/ combined

        SSLEngine on
        SSLCertificateFile      /etc/apache2/ssl/
        SSLCertificateKeyFile   /etc/apache2/ssl/
</VirtualHost>

Create a htpasswd file:

htpasswd -c /var/www/ USERNAME

USERNAME and the password you'll enter will be your access credentials.
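If you'd rather script this than answer the interactive prompt, the hash can also be produced with openssl; a minimal sketch, where USERNAME and the password 'secret' are placeholder values:

```shell
# generate an AuthUserFile line non-interactively (htpasswd -nb USERNAME secret does the same)
# USERNAME and the password 'secret' are placeholders
entry="USERNAME:$(openssl passwd -apr1 secret)"
echo "$entry"    # append this line to the htpasswd file yourself
```

The -apr1 flag produces the Apache-flavoured MD5 hash that htpasswd writes by default.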

testing (on linux)

apt install -y cadaver

Then you are prompted for the user credentials. ls and help should help you onwards from there.

using it with windows

Since this is a setup with SSL (it doesn't make much sense to use plain http from my POV), you'll need to import the certificate in windows.

Else you will get an error along the lines of Mutual Authentication Failed. The Server's Password Is Out Of Date At The Domain Controller.

It needs to go here (in windows):

  • win + r
  • certmgr.msc
  • enter
  • trusted root certificates
  • certificates

apache redirect non-existing urls to homepage

posted on 2016-11-22 18:50

RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ / [L,QSA]

apache rewrite non www to www

posted on 2016-11-16 16:20

For https hosts:

RewriteEngine on
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

While trying out the settings above, you should use a 302 instead of a 301.

exclude .git and .svn from apache

posted on 2016-09-27 11:32

To prohibit the source control folders from being served by the webserver, use this in your apache vhost config:

<DirectoryMatch \.svn>
    Order allow,deny
    Deny from all
</DirectoryMatch>

<DirectoryMatch \.git>
    Order allow,deny
    Deny from all
</DirectoryMatch>
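On Apache 2.4 and newer, the Order/Deny directives are superseded by mod_authz_core; a sketch of the equivalent:

```apache
# Apache 2.4+ equivalent of the blocks above
<DirectoryMatch "\.(svn|git)">
    Require all denied
</DirectoryMatch>
```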

apache mod_pagespeed installation on debian 8

posted on 2016-08-29 09:36

In short for x86_64:

dpkg -i mod-pagespeed-stable_current_amd64.deb
service apache2 restart

apache htpasswd

posted on 2016-07-14 13:27

To password-protect a phpmyadmin interface via a .htpasswd authentication through the apache, try this in your vhost. (I prefer doing these things in the vhost instead of from within a .htaccess in the webfolder.)

vim /etc/apache2/sites-enabled/phpmyadmin.conf


<Directory /var/www/phpmyadmin/htdocs>
    Options -Indexes +ExecCGI +FollowSymLinks -MultiViews
    AllowOverride all

    AuthType basic
    AuthName "phpmyadmin pw"
    AuthUserFile    /var/www/phpmyadmin/.htpasswd
    Require   valid-user
</Directory>

Afterwards create a user (here called 'admin') in the .htpasswd file, which lies in the WEBROOT, not the DOCROOT of your hosting, so it cannot be changed via FTP access. (FTP is available only for the htdocs folder in my example.)

htpasswd -c /var/www/phpmyadmin/.htpasswd admin

Then you will be prompted for entering a password twice.

A service apache2 reload afterwards activates the changes and you are done.

Infrastructural website tuning

posted on 2016-05-15 16:38


The following write-up leans on an article I read years ago, but this post's essence should still hold up today. It is targeted at the PHP landscape, and is not something you need for your personal blog. Do use a static website generator if you care about speed there, your wordpress is gonna be hacked some day anyway.

If you already have two servers, one for your webserver, one for your database, this is more likely for you.
If you already have a loadbalancer in front of two webservers, this is definitely for you.

But even big webshops usually do not put such measures in place, unless they really do care about their response times and thus about their google ranking.

Mostly this is guidance on how to tackle the customer's favourite complaint: 'It is so slow!', while providing some background.

highlevel approaches

When trying to fix a 'slow' website, there are several approaches.

  • fix the website code
  • throw hardware at the problem
  • change the underlying infrastructure (software-wise)

The first usually is not going to happen, as good web developers, especially in the PHP universe, are just rare. They fight their codebase and are happy when things work correctly. Performance comes second, and profiling their application is something they often have never done or even heard about.

The second was a nice solution once, but GHz numbers don't improve as drastically as they did in the past. And since the memory wall gets hit, this solution also ceases to be a viable approach, no matter how fast you make your webserver connect to your database. SSD's do help, but only so much.

Which leaves us with option three, or the following measures in particular:

  1. SSD's (noted here for sake of completeness)
  2. handle sessions via redis
  3. separate static from dynamic content and serve each via different webservers
  4. browser caching
  5. accelerators, like squid or varnish in front
  6. opcode caches
  7. database caching via memcached to relieve the main database
  8. CDN's like akamai

fast storage

If you are about to migrate your website onto new hardware anytime soon (onto a new single server is what we talk about), think about getting SSD's. These have capacities of 250GB upwards, which in a RAID10 setup gives you around 500GB of usable, redundantly persisted space. No matter how big your web presence is, after subtracting 20GB for a linux operating system this leaves you with plenty of diskspace for whatever your future may hold.

For budgeting reasons, a RAID1 setup of two SSD's, still providing ~230GB of space, is usually sufficient, unless you plan on storing literally shitloads of FTP data or useless backups. Backups are to be done off-site on another server anyway. You don't need version control on your production server either (except for /etc maybe), unless you think you are a true DevOp, fight others to the bone about the agile kool-aid and know jack-shit anyway.

But, no hard feelings, point is, just get SSD's if you can afford them.

session handling

This is sort of a prerequisite, depending on your overall approach; see the conclusion at the bottom for what this means.

If you happen to have a lot of sessions, this improves things a bit. Reading sessions from an in-memory database is just plain faster than letting the webserver fetch them from the harddisk every time. This is only true if redis runs on the same machine as your webserver, as network latency is almost always higher than the latency of disk I/O operations. If you have a direct crosslink to your dedicated session server with 10G NIC's, this is not true, but if you have that in place you sure as hell do not need this whole article.

If you however split your load onto several webservers behind a loadbalancer, and want a real 'shared-nothing' architecture, things are different. In that case, your loadbalancer is not configured to use sticky sessions, and so you need a central place where your sessions are managed. 'Sticky sessions' simply means each user is served by the same webserver every time he visits your website.

Unless you really think you need a shared nothing installation or have REALLY many sessions, you don't need this.

Use redis instead of memcached for this, as the former can persist the in-memory data to disk.
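For PHP this boils down to two php.ini lines; a sketch assuming the phpredis extension is installed and redis listens on its default port:

```ini
; store PHP sessions in redis instead of on disk (requires the phpredis extension)
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379"
```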

static vs. dynamic content

Classify all your content into one of these two groups. Put all static content on a separate webserver, serving only a subdomain of your site. This frees up resources on your 'dynamic' webserver. If you put both on the same hardware and the dynamic webserver eats all the resources, doing this is of no use, of course. You need another server in that case.

In case you read this, and have zero clue what static vs. dynamic means:

  • static content = html files on your website
  • dynamic content = html code generated by your php code, which is then inserted into the already existing html mentioned as 'static'

'dynamic content' here is not to be confused with 'dynamic html', which rather means client-side changing of a website through changes to the DOM tree via javascript in the form of 'asynchronous javascript and xml' (AJAX).

browser caching

Classify your content by how often it changes, maybe like this:

  • never change (6 months caching time)
  • seldom change (1 week)
  • often change (1 day)
  • always change (1 minute)

This is just rough guidance from the top of my head, adjust to your needs.

Set the caching headers of your HTTP packets accordingly, and let the users browser help you reduce your servers' load.
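With mod_expires enabled (a2enmod expires), the buckets above could be sketched roughly like this; the type-to-bucket mapping is made up and needs adjusting to your actual content:

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png              "access plus 6 months"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 day"
    ExpiresByType text/html              "access plus 1 minute"
</IfModule>
```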

To still be able to exchange old content for new, add hashes to the URLs of your 'never-changes' content, to make sure that when things change, new content will be served no matter what cache expiration times you use. These hashes have to be created automatically during your deployment process and inserted into your application code, also automatically.

This is actually something rather sophisticated, but otherwise you have the same problem as with using 301 Redirects: Caching times and permanent redirects don't forgive fuck-ups on your behalf.
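A minimal sketch of such a cache-busting step, done by hand with coreutils (style.css is a made-up example asset; a real deployment hooks this into its build process):

```shell
# embed a short content hash into the asset's filename at deploy time
echo 'body { color: #000; }' > style.css
hash=$(md5sum style.css | cut -c1-8)
cp style.css "style.${hash}.css"
echo "style.${hash}.css"    # this is the name to reference from your html
```

When the file's content changes, so does the hash in the URL, so stale cached copies simply stop being requested.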

reverse-proxies for accelerating things

If you already separated dynamic from static content, what sense does it make to also put an accelerator in the form of a reverse-proxy like squid or varnish up front?

Accelerators create something like 'static snapshots' of your combined static-dynamic content and serve them directly: the static html to be served is assembled once from the already existing static parts plus the html generated from the interpreted php code, instead of on every single request.

Some made up numbers for a big website and requests being possibly served per second:

  • dynamic content webserver: 100
  • static content webserver: 5k
  • accelerator: 250k

Sounds good?

It is important to differentiate HTTP GET from all the other request methods. GET's don't change things; POST's or PUT's and such do. GET's can be served by the accelerator, but the others must pass through all your caching layers.

opcode caches

PHP instructions are parsed and compiled into operation codes (the bytecode the php interpreter executes) on every request. To speed things up, opcode caches like APC basically cache these precompiled instructions, skipping the parse and compile steps on subsequent requests.

Your website can become up to three times faster, just through the opcode cache.
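With PHP 5.5+, the bundled OPcache took over this job from APC; a php.ini sketch with illustrative values:

```ini
; enable the opcode cache and give it room to work (values are examples, not tuning advice)
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
```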

database caching

memcached is an in-memory key-value store, caching often-used data from the database. Any questions why this might be beneficial? ;)

Sidenote: There exist memcache and memcached which are separate programs, don't get confused by that.

content delivery networks

Unless you have seriously big traffic spikes (read: if you have to wonder whether this applies to you, it doesn't), you don't need a CDN.

CDN's are put in place by exchanging the subdomain pointing to the data served by your static webservers for another subdomain pointing to the CDN. This is helpful if you have single occasions, like special days, where you know your load will be ridiculously high and you'd need a lot more serving power than you usually do.

If you need your CDN to not just serve static content but complete sites, you can't just use your own loadbalancer. The CDN's loadbalancer must be configured and put to work.

So instead of getting more machines yourself, set things up accordingly and employ a CDN of your choice. Akamai is rather good.

Else your machines will idle around for 99.99% of the time you have them in place, and you would have a hard time making a profit out of them. Ever wondered why amazon is such a huge cloud provider? Those are just the machines that would otherwise be doing nothing while christmas is still far away.


conclusion

In case you have a single server running both a webserver and a database server, what are the easiest steps for speeding things up?

  1. opcode cache
  2. accelerator
  3. memcached

Also the SSD's help, but usually you get them up front, not after your installation is already running, as there are migration fees to be paid if you want your provider to reinstall your hosting.

Further, categorize your content and implement browser caching.

The next step would be more hardware and distributing the load onto several webservers.

So what if you already have more than one server?

Do the first three points mentioned above and caching.

Then set up dedicated session handling, so your load will be distributed more evenly across your servers when using a loadbalancer.

For setting everything else up, you should know what you are doing and not just be reading this.

apache: .htaccess redirect all

posted on 2016-03-08 17:11:39

To redirect every incoming request to a new URL:

RewriteEngine on 
RewriteRule ^(.*)$$1 [R=301,L]

This will redirect everything; try it with a 302 first, instead of the 301. A 301 happens to be permanent, and if you mess something up, people have to clear their browser caches...

apache: set multiple origin domains

posted on 2016-02-19 13:14:51

Exchange MYDOMAIN\.DE and WWW\.MYDOMAIN\.DE with your own domain:

SetEnvIf Origin "^http(s)?://(.+\.)?(MYDOMAIN\.DE|WWW\.MYDOMAIN\.DE)$" origin_is=$0 
Header always set Access-Control-Allow-Origin %{origin_is}e env=origin_is

Put these lines in a config of your choice, likely in /etc/apache2/include.d/<filename>.conf.

Apache: rewrite logging

posted on 2015-06-19 22:35:40

To debug failing redirects and failing rewrite rules, try enabling this log.

After enabling it, tail -f /var/log/apache2/rewrite.log and try the site you want to debug in your browser / via curl.

When you are done, comment the options out again, as they WILL fill your disk up.

apache < 2.4

RewriteEngine On
RewriteLogLevel 3
RewriteLog /var/log/apache2/rewrite.log

LogLevel warn

apache == 2.4

RewriteEngine on

LogLevel warn rewrite:trace3

The RewriteLog and RewriteLogLevel directives were removed in 2.4; the rewrite trace now goes to the regular error log.


You might have to do this:

mkdir /var/log/apache2/ && touch /var/log/apache2/rewrite.log


apachebench

posted on 2015-02-18 13:45:50

apachebench (ab for short) is dubbed the 'Apache HTTP server benchmarking tool'.

Straight from the man page:

ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.

A usage example:

ab -n200 -c25 <url>, where n is the number of requests, and c the number of concurrent ones.

[sjas@ctr-014 ~]% ab -n200 -c25                                                                                 
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking (be patient)
Completed 100 requests
Completed 200 requests
Finished 200 requests

Server Software:        gws
Server Hostname:
Server Port:            80

Document Path:          /
Document Length:        218 bytes

Concurrency Level:      25
Time taken for tests:   0.541 seconds
Complete requests:      200
Failed requests:        0
Write errors:           0
Non-2xx responses:      200
Total transferred:      114800 bytes
HTML transferred:       43600 bytes
Requests per second:    369.41 [#/sec] (mean)
Time per request:       67.676 [ms] (mean)
Time per request:       2.707 [ms] (mean, across all concurrent requests)
Transfer rate:          207.07 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       11   11   0.2     11      12
Processing:    46   51  22.5     49     304
Waiting:       46   51  22.5     49     304
Total:         57   62  22.5     60     315

Percentage of the requests served within a certain time (ms)
  50%     60
  66%     60
  75%     61
  80%     61
  90%     61
  95%     62
  98%     66
  99%    249
 100%    315 (longest request)

OpenSSL Ciphers

posted on 2014-12-10 18:00:06

This will show you the currently available secure ciphers:

openssl ciphers 'HIGH:MEDIUM:!MD5:!RC4:!aNULL:!eNULL:!EXPORT:!SEED:!PSK' | sed 's/:/\n/g'

The sed afterwards is just so you will have the output with one cipher per line.

The part after 'ciphers' and before the pipe is what you actually have to put into your apache config.
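In a vhost that could be sketched like this (mod_ssl assumed enabled; the cipher string is the one from above):

```apache
SSLCipherSuite HIGH:MEDIUM:!MD5:!RC4:!aNULL:!eNULL:!EXPORT:!SEED:!PSK
SSLHonorCipherOrder on
```

SSLHonorCipherOrder makes the server's preference win over the client's.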

Apache redirecting

posted on 2014-11-09 17:35:27

A sample for apache redirecting, a very usable snippet for use from within a .htaccess:

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/$
RewriteRule (.*) [L,R=301]

You might want to change the URL. ;)

Apache, mod_proxy, tomcat, two ip's on Debian

posted on 2014-07-31 13:45:12

To get an apache running that serves different ip's and sites at once, all on port 80, plus hands requests through to tomcat, this guide tries to explain the necessary steps.


First, set up a second ip for proper networking:


# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

#allow-hotplug eth0
auto eth0 
iface eth0 inet static

auto eth0:1
iface eth0:1 inet static

For security reasons, the actual subnet used was exchanged to 10.0.0.. Use your own. :)

IP 1 is, IP 2 is here.

Do not forget to take the interface up afterwards:

$ ifdown eth0
$ ifup eth0

Also do not use service networking restart, as it is deprecated.
Do not use ip l set eth0 down and ip l set eth0 up for this either. That will bring the link back up, but you won't have ip addresses assigned. For more information, look into the iproute2 tool suite; it is really mighty, but you may need some more in-depth knowledge.


$ ip a

should show you something like this:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:25:90:ea:45:ac brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
    inet brd scope global secondary eth0:1

Then eth0 has state UP (not DOWN) and you see both IP's properly assigned. If you do not use a syntax like eth0:1 for the second ip in /etc/network/interfaces, you will only see one ip shown by the deprecated ifconfig command!


Tomcat's settings should best be left untouched, so it uses localhost and port 8080 to listen on.



   <Connector port="8080" protocol="HTTP/1.1"


If apache's mod_proxy were not used, the second ip could be set here as the address, and the port changed to 80. However you'd need a linux system account if you want to use a port below 1024. If you do not want this, you have to use either mod_proxy, mod_proxy_ajp, or mod_jk. The latter is the fastest and has the most settings, but is also the most complex. mod_proxy_ajp is in between the two, speed-wise. mod_proxy however works with any backend, not just tomcat or other servlet containers.




NameVirtualHost *:80
NameVirtualHost *:443
Listen 80
Listen 443

Note that you may need to drop the 443 lines if you do not use https. The NameVirtualHost directive tells apache to enable name-based virtual host support. This is needed since our apache serves several domains. If the directive were omitted, apache would only ever serve the first domain it encounters during its loading process. (Can be shown via apache2ctl -S.)

Since Tomcat serves only one site, no name-based virtual hosting is needed for it, thus no entry is needed.

virtualhost configs

It is further assumed that you already have two existing vhost files, one per domain, which are properly structured, enabled and working. The sites already reside in /etc/apache2/sites-available.

First IP:





Second IP:



    ProxyPass / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/

The 000-, 001- and 002- are just prefixes, to ensure the order of the pages being loaded.


Enable the apache proxy module.

$ a2enmod proxy
$ a2enmod proxy_http


Enable the vhost configs and restart the web server.

$ a2ensite
$ a2ensite
$ a2ensite 002-proxy-for-tomcat
$ service apache2 restart

Certificates, OpenSSL in depth and GnuTLS

posted on 2014-07-10 14:37:52

This post should give an overview on the most used OpenSSL commands, and how SSL/TLS/X.509 in general works.

Since this post was written a long time ago, it might get revisited in the future. But that would be a major overhaul, so it will not happen in the near future either.

There will eventually be some ascii art of a schematic PKI in general, and the sections about the filenames and about openssl will get cleaned up.

post vocabulary and some notes

The most used terms are abbreviated in the following.

PK = Private Key
C = Certificate
CSR = Certificate Signing Request
CA = Certificate Authority

Usually this seems way harder than it is in reality, once you get the hang of it. The hardest part is to understand which file, belonging to which server, is needed for the current step.


Some more abbreviations first:

SSL : Secure Sockets Layer
TLS : Transport Layer Security
X.509 : Public Key Infrastructure (PKI) and Privilege Management Infrastructure (PMI) standard by the "International Telecommunication Union Telecommunication Standardization Sector" (ITU-T).

SSL and its successor TLS are protocols for encrypting internet communication. The C infrastructure setup is defined in the X.509 standard. That is why these acronyms pop up in any discussion about this topic.

On a sidenote, a more general equation:


Since this post is focused on usability, the techniques in question that are used in a PKI or PMI are of no concern here.

The C chain usually looks like this (intermediates can, but need not, exist):

  1. Root C
  2. Intermediate C
  3. C

The last C is the one issued by the CA where you submitted your CSR.

Only if all C's are present and used correctly will SSL checking tools (see here or here) tell you your C's are set up accordingly.
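The check itself can also be sketched locally with openssl verify; here a throwaway self-signed C acts as its own root (the tmp.* filenames are made up):

```shell
# create a throwaway PK plus self-signed C...
openssl req -x509 -nodes -days 1 -newkey rsa:2048 -subj "/CN=example" \
    -keyout tmp.key -out tmp.crt 2>/dev/null
# ...and verify it against itself, as if tmp.crt were the root of its own chain
openssl verify -CAfile tmp.crt tmp.crt
```

For a real chain you would pass the concatenated root and intermediate C's via -CAfile (or hand the intermediates to -untrusted).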

File types

There exist a bunch of file types, you have to be able to differentiate.

file types

.key : private key file (PK), but that's just a convention
.csr : certificate signing request (CSR)
.crt : certificate (C)
.cer : certificate (C), Microsoft used this naming scheme earlier

For .pem and .der files, see next section.

PK.key, CSR.csr, C.crt are kind of placeholders for your actual filenames in the following sections. A good naming scheme would be subdomain_domain_tld-year, without dots. Dots happen to either not work or cause other problems. Appending the year your C was issued helps with distinguishing in case you renew a certain certificate.

containers and encodings

Containers are used for grouping C's (and optionally the PK) into a single file.

.pem: ascii / base64 encoded container
.der: container in binary format

The extension hints at the encoding being used, for the container. A container usually consists of the set of all C's (the entire trust chain), and can optionally also contain the PK.

All the files from the section before can be in PEM or DER format, IIRC!

For more information on the Distinguished Encoding Rules (DER) or the Privacy-enhanced Electronic Mail (PEM), just click these links.


PK / CSR generation

For usage with Certificate Authorities (CA's)

Generate a PK and a CSR:

openssl req -out CSR.csr -new -newkey rsa:2048 -nodes -keyout PK.key

If you already have an existing PK and just need a CSR:

openssl req -out CSR.csr -key PK.key -new

Create a new CSR for an existing C:

openssl x509 -x509toreq -in C.crt -out CSR.csr -signkey PK.key

Complete self-signed certificate

Generation of a self-signed (ss) C, based on a newly generated PK with a term of validity of one year (365 days):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout PK.key -out C.crt

ss-C's for https are still better than traffic over plain http, but for private websites, for example, StartSSL provides C's for free. Free as in 'no money needed'.

convert PEM to DER

openssl x509 -in C.crt -outform der -out C.der

convert DER to PEM

openssl x509 -in C.crt -inform der -outform pem -out C.pem

viewing PEM encoded files containing a C

For debugging reasons, this might actually be the most used command.

openssl x509 -in C.pem -text -noout
openssl x509 -in C.crt -text -noout
openssl x509 -in C.cer -text -noout

This will not work on a single PK file.
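For a lone PK, the rsa subcommand is the counterpart; a sketch with a freshly generated throwaway key (tmp.key is a made-up filename):

```shell
# generate a throwaway PK and dump its contents
openssl genrsa -out tmp.key 2048 2>/dev/null
openssl rsa -in tmp.key -noout -text | head -n 3
```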


gnutls

Get it:

apt-get install gnutls-bin -y



Compared to the openssl tool suite, certtool is actually self-explanatory.


In the following, keyfiles get a .key extension, but that is just a naming convention. In reality they are just .pem files too, but with this practice the files are easier to differentiate.

generate PK's (private keys)

certtool --generate-privkey --outfile PK.key --rsa

Use --dsa or --ecc flags if you want to change the used cryptosystem.

generate CSR's (certificate signing requests)

certtool --generate-request --load-privkey PK.key --outfile CSR.pem

generate C (certificate) from CSR (certificate signing request)

Usually this is a CA_C.pem, a CA certificate.

certtool --generate-certificate --load-ca-privkey CA_PK.key --load-ca-certificate CA_C.pem --load-request CSR.pem --outfile C.pem

generate C (certificate) from PK (private key), lacking a CSR

certtool --generate-certificate --load-ca-privkey CA_PK.key --load-ca-certificate CA_C.pem --load-privkey PK.key --outfile C.pem

generate a self-signed C (certificate), the fast way

certtool --generate-privkey --outfile CA_PK.key --rsa
certtool --generate-self-signed --load-privkey CA_PK.key --outfile CA_C.pem

Here's a one-liner to copy-paste:

certtool --generate-privkey --outfile CA_PK.key --rsa && certtool --generate-self-signed --load-privkey CA_PK.key --outfile CA_C.pem

create a .p12 / pkcs #12 container file

A .p12 file includes all three parts usually needed on the server side:

  • CA certificate

  • server PK

  • server C

    certtool --to-p12 --load-ca-certificate CA_C.pem --load-privkey PK.key --load-certificate C.pem --outfile CONTAINER.p12 --outder

show certificate information

certtool --certificate-info --infile C.pem

Python via Apache

posted on 2014-07-07 11:39:12

Use mod_wsgi instead of mod_python.

An example can be found here.

Certificate content viewed with OpenSSL

posted on 2014-07-07 11:27:04

To show the contents of an ssl/tls certificate with openssl, do:

openssl x509 -in C.crt -text -noout

where C.crt is the name of your actual certificate. Should work for .pem, .cer and .key files as well.

Apache redirect http traffic to https

posted on 2014-06-30 10:30:09

There are two approaches to this. Either use a redirect or a rewrite rule.

Redirect, the officially recommended method

NameVirtualHost *:80
<VirtualHost *:80>
   DocumentRoot /usr/local/apache2/htdocs
   Redirect permanent /
</VirtualHost>

<VirtualHost _default_:443>
  DocumentRoot /usr/local/apache2/htdocs
  SSLEngine On
  # etc...
</VirtualHost>
It's a little faster than a rewrite, plus it is officially recommended. However, behind an SSL offloader it's said not to work, as I read on stackoverflow. My guess would be that this case could be fixed through the use of HTTP headers, but I currently have no setup where I can verify that without breaking things and wreaking havoc.

Rewrite, for completeness sake (and in case #1 did not work)

Use this in your vhost configuration:

RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1  [R=301,L,QSA]

To redirect just specific http traffic concerning a specific folder, use a different rewrite rule:

RewriteRule ^/?name-of-your-folder/(.*) https://%{SERVER_NAME}/name-of-your-folder/$1 [R=301,L,QSA]

Apache webserver redirect to subfolder

posted on 2014-06-24 11:21:17

Put into an .htaccess which resides in your web root folder:

RewriteEngine On
RewriteCond %{REQUEST_URI} ^/$
RewriteRule (.*) /name-of-subfolder-to-redirect-to/ [L,R=301]

When omitting the R=301 flag, the URL will not be rewritten in the browser's address bar.


Unless otherwise credited all material Creative Commons License by sjas