WHMCS Admin Login – Please complete the captcha and try again – Safari
When trying to log in to the WHMCS admin area, you receive the error “Please complete the captcha and try again.”
In our experience this occurred with Safari on a Mac, after a recent update. Other browsers were not affected at the time of writing.
Our WHMCS system has Invisible reCAPTCHA enabled.
The Solution
It appears that a Safari update in May 2023 enabled a feature to “Hide IP address from trackers”.
With this feature enabled, the reCAPTCHA on the admin login fails consistently. Turn the feature off and the problem goes away. Hope this helps others.
This post WHMCS Admin Login – Please complete the captcha and try again – Safari first appeared on InteractiveWebs and is written by %AUTHOR%
WordPress Geo Redirection using Cloudflare
So the problem for us started out as this. We have a WordPress website with location-specific content. It was set up in WordPress using Elementor, essentially duplicating the core site pages (contact-us, about-us, home page, services, etc.) with content specific to visitors in Europe versus Australia.
So for example, the home page has images related to either Australia or the UK.
Kind of a different story than the context of this post, but within the Elementor header we created four headers. These use flag images with set links that let a visitor switch to a URL showing UK content or not.
Each of the four headers carries one of two menus: a UK menu linking to all the UK-specific pages, or a non-UK menu intended for all other countries, Australia in particular.
The idea is that, just like a normal site, you have home, about-us, contact-us etc., giving domain.com/contact-us/ for the base of the site, while domain.com/contact-us/uk/ takes the user to a UK-specific version of the contact-us page, with all the headers linking to the relevant versions of the pages.
So we have several redirection needs.
1. Google indexes the content and, in time, serves the relevant pages for searches made in Europe and in Australia, staying SEO friendly in both regions.
2. A user directly entering a URL like domain.com/contact-us or domain.com/contact-us/ needs to end up at the correct pages based on their Geo Location. This had to work for all pages on the site with specific content.
3. We have the pages cached and the on-page code optimised for SEO. Nothing we do can slow this down or interfere with it.
4. It has to be fast and apparently effortless for the end user.
We initially looked for the easy way out: a simple geo redirection plugin on the WordPress site. This would work well for a single page like the home page, but we could not find a free plugin that handled all our requirements. We only needed it for about 10 pages, but not just one.
SEO and PageSpeed Considerations
Because our site has been optimised for WordPress SEO with a combination of tools, we have a comprehensive setup that delivers extremely fast content worldwide. The Cloudflare configuration uses the free tier to optimise many things, specifically DNS services for this site. One of the Cloudflare services that is free to use is geo redirection using Transform Rules.
Login to Cloudflare and select your domain in question.
Select Rules / Transform Rules
Create a New Rewrite URL
Fill in the fields for either Country or Continent, equals, Europe in our example
AND
URI equals /. (Representing the home page or domain.com)
Then rewrite the path with a dynamic value: concat(http.request.uri.path, "/home/uk/")
Note that in our example, the home page of the site outside Europe loads on domain.com, and the home page of the Europe pages is domain.com/home/uk/. This is why our rule appends "/home/uk/" for the home page rule.
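Put together as a Transform Rule expression, the home-page rule looks roughly like this. This is a sketch only: the field names follow Cloudflare's rules language as we understand it (ip.geoip.continent was the continent field at the time of writing), so check them against the Cloudflare dashboard before saving.

```
(ip.geoip.continent eq "EU") and (http.request.uri.path eq "/")

Rewrite to… > Path > Dynamic:
concat(http.request.uri.path, "/home/uk/")
```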
Save the rule
We then created rules for the other pages on the site. We specifically wanted an end user to be able to type a page URL with or without the trailing /. For example, domain.com/bookkeeping-services/ is one page; we want a user who types domain.com/bookkeeping-services to be redirected to /bookkeeping-services/uk/ if they are in Europe.
This is achieved by matching bookkeeping-services without the uk segment, to allow for someone who is already requesting the /uk/ address. We do not want to add a second /uk/ and make the URL invalid: domain.com/bookkeeping-services/uk/uk/
Note how it makes sure there is not already a “uk” in the page name before adding it to the URL. This worked well and ensured that all our goals were met.
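As an expression, that per-page rule can be sketched like this (bookkeeping-services is the example page above; the not clause is the guard against doubling up /uk/; field names are assumptions per Cloudflare's rules language, so verify them in the dashboard):

```
(ip.geoip.continent eq "EU")
and (http.request.uri.path contains "/bookkeeping-services")
and not (http.request.uri.path contains "/uk")

Rewrite to… > Path > Dynamic:
concat(http.request.uri.path, "/uk/")
```

One thing to watch: if the incoming path already ends in /, this concat produces a double slash, so the exact argument you pass to concat depends on which form of the URL you expect users to type.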
I noted that there is a real lack of information about dynamic rules and how to set them up. There are some examples, but very few cover dynamic redirections. People discuss this and ask questions, but the coders answering often refer people to Cloudflare Workers, which I don't really see the point of for something as simple as this.
Anyway, good luck if you are replicating this process, and feel free to post comments that may help others.
Magento File Size Keeps Growing
You find that your Magento installation keeps growing and that your web server is taking up GB of data.
Magento keeps creating reports that can take up a lot of space. As a web host, this is a problem for backup and file system management.
If you do not need the reports, it is safe to delete them from: /var/report
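If you want to clear them from the shell, here is a minimal sketch. Run it from the Magento root; the seven-day cutoff is our own arbitrary choice, so adjust it to taste, and check what you are about to delete first.

```shell
# Delete Magento report files older than 7 days.
# MAGENTO_ROOT defaults to the current directory; override it for another install path.
MAGENTO_ROOT="${MAGENTO_ROOT:-.}"
find "$MAGENTO_ROOT/var/report" -type f -mtime +7 -delete 2>/dev/null || true
```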
https://magento.stackexchange.com/questions/58166/cleaning-up-magento-installation-which-files-folders-can-be-deleted
cPanel AutoSSL certificate expiry on Date – Cloudflare Enabled domain
You may receive a warning email from cPanel with errors that look something like this after you have enabled Cloudflare and moved your DNS servers over to it. This occurs after enabling the Cloudflare option to always use HTTPS.
Errors in this case in cPanel reported:
DNS DCV: No local authority: “domain.com.au”; HTTP DCV: “cPanel (powered by Sectigo)” forbids DCV HTTP redirections.
It is likely that you have enabled the Cloudflare feature “Always Use HTTPS” (which you would think is safe), but in this case it causes the error. The simple fix is to disable it.
In Cloudflare, after logging in and selecting the domain in question, open the SSL/TLS menu and select Edge Certificates.
Then select “Always Use HTTPS” to Off.
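If you manage a lot of zones, the same toggle can be made through the Cloudflare API. This is a sketch: the zone ID and API token values are placeholders you must replace with your own, and the always_use_https zone setting accepts "on" or "off".

```shell
# Turn off "Always Use HTTPS" for a zone via the Cloudflare API (placeholders below are yours to fill in)
ZONE_ID="${ZONE_ID:-your-zone-id}"
API_TOKEN="${API_TOKEN:-your-api-token}"
curl -s -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/settings/always_use_https" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"value":"off"}' || true   # sketch only: ignore network errors here
```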
Then return to your cPanel SSL/TLS Status and rerun the Auto SSL process. This time it should fix the issue.
That is about it. There are technically ways to make the two play together if you need this option enabled, but consider that you can still enable “Force HTTPS Redirects” in cPanel's domain settings to ensure your site is always accessed over HTTPS. This is separate from the similarly named Cloudflare setting and should still give your site the security that users need. Handling this on the Cloudflare side is really only an advanced option.
Reduce Exchange 2016 Mailbox Database White Space
Reduce Exchange 2016 Mailbox Database size using Eseutil.
In this post I will walk through the steps of reducing an Exchange Mailbox Database's size. The method varies between administrators: if you can afford downtime on a Mailbox Database, these steps will work for you. Some Exchange administrators (myself included, in most cases) will simply create a new Mailbox Database, move mailboxes from one database to the other, and then delete the old one.
For the purpose of this post, I will be using a built-in tool from Microsoft called Eseutil.
Eseutil is a command line utility that works with Extensible Storage Engine (ESE) database (.edb) files, streaming (.stm) files, and log (.log) files associated with an Information Store in a given Storage Group. The tool runs on one database at a time from the command line and can perform a range of database tasks, including repair, offline defragmentation, and integrity checks in Exchange Server.
The most common Eseutil switches are listed in the table below.
/D (Defragmentation): Defragments the database files. This mode reduces the gross size on disk of the database (.edb) and streaming (.stm) files by discarding most empty pages and ad hoc indexes. For more information, see Eseutil /D Defragmentation Mode and How to Run Eseutil /D (Defragmentation).

/P (Repair): Repairs corrupt database pages in an offline database but discards any that can’t be fixed. In repair mode, Eseutil fixes individual tables but does not adjust the relationships between tables; ISInteg should be used to check logical relationships between tables. See Eseutil /P Repair Mode and How to Run Eseutil /P (Repair) in Different Scenarios.

/C (Restore): Displays the Restore.env file and controls hard recovery after restoration from an online backup. See Eseutil /C Restore Mode and How to Run Eseutil /C (Restore) in Different Scenarios.

/R (Recovery): Replays transaction log files or rolls them forward to restore a database to internal consistency or to bring an older copy of a database up to date. See Eseutil /R Recovery Mode and How to Run Eseutil /R in Recovery Mode.

/G (Integrity): Verifies the page-level and Extensible Storage Engine (ESE) level logical integrity of the database but does not verify database integrity at the Information Store level. See Eseutil /G Integrity Mode and How to Run Eseutil /G in Integrity Mode.

/M (File Dump): Displays headers of database files, transaction log files, and checkpoint files. This mode also displays database space allocation and metadata. See Eseutil /M File Dump Mode and How to Run Eseutil /M in File Dump Mode.

/K (Checksum): Verifies checksums on all pages in the database and streaming files. See Eseutil /K Checksum Mode and How to Run Eseutil /K in Checksum Mode.

/Y (Copy File): Performs a fast copy of very large files. See Eseutil /Y Copy File Mode and How to Run Eseutil /Y in Copy File Mode.
To get started, I will check the current available white space for the Mailbox Database DB1 and then dismount the database.
Get-MailboxDatabase DB1 -Status | Format-List Name, DatabaseSize, AvailableNewMailboxSpace
Next, dismount the database:
Dismount-Database DB1 -Confirm:$false
After the Mailbox Database has been dismounted, navigate to the directory where Exchange has been installed.
C:\Program Files\Microsoft\Exchange Server\V15\Bin>eseutil.exe
Run eseutil on its own to view the available switches.
Next I will run the defrag on the dismounted database “DB1”:
C:\Program Files\Microsoft\Exchange Server\V15\Bin>eseutil.exe /d D:\DB\DB1.edb
Let’s go ahead and mount the defragmented database:
Mount-Database DB1
Next, let’s view the available white space after the defragmentation of the database:
Get-MailboxDatabase DB1 -Status | Format-List Name, DatabaseSize, AvailableNewMailboxSpace
The amount of time the defragmentation of a Mailbox Database takes depends on the size of the database and the hardware it is running on.
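The whole sequence can be sketched as one Exchange Management Shell session. This is a sketch pulled together from the commands above, not a script to run blindly: DB1 and the D:\DB\DB1.edb path come from the example, users lose mailbox access while the database is dismounted, and eseutil needs temporary free disk space roughly the size of the database to build its defragmented copy.

```powershell
# Check current size and recoverable white space
Get-MailboxDatabase DB1 -Status | Format-List Name, DatabaseSize, AvailableNewMailboxSpace

# Take the database offline (mailboxes on it are unavailable until remounted)
Dismount-Database DB1 -Confirm:$false

# Offline defragmentation; eseutil writes a compacted copy alongside the original
& "C:\Program Files\Microsoft\Exchange Server\V15\Bin\eseutil.exe" /d D:\DB\DB1.edb

# Bring the database back online and re-check the size
Mount-Database DB1
Get-MailboxDatabase DB1 -Status | Format-List Name, DatabaseSize, AvailableNewMailboxSpace
```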
This has been copied for my own reference as it is super handy and exactly what I need to reference in the future. Taken from https://www.thatlazyadmin.com/2017/07/25/reduce-exchange-2016-mailbox-database-size-using-eseutil/
(Thanks for the perfect details. Needed my own copy as I can’t afford to miss this in the future)
Another useful reference for the database specifics: https://www.datanumen.com/blogs/depth-understanding-system-mailboxes-exchange-server/
iPhone Add Email Account Exchange asks to sign in to Office 365
The issue arises when you attempt to configure a new Exchange account that is not hosted on Office 365, but you keep getting a prompt to sign in to an Office 365 mail account when you add the account for the domain.
This can be very confusing: if you host your own services, or use a host that knows what they are doing, you can have an Exchange email account that is not part of Office 365, yet you still see the Office 365 sign-in prompt when setting up the account on Apple devices. It may well prompt on non-Apple devices too; we did not bother to test.
The issue likely arises where Office 365 services were once associated with the email domain in question. Providers like GoDaddy commonly let their users click a few buttons and sell Office 365 email services. If the domain previously had an Office 365 email service, it needs to be removed from the service that originally set it up.
We did extensive checking of DNS and Autodiscover settings and found that even when DNS correctly carries the discovery records for the on-premises Exchange server, or a third-party Exchange-compatible email service (SmarterMail etc.), the device will still fail to find the server for login and will instead offer the previously activated Office 365 service.
Apple products appear to check Office 365 for the domain and, if they find a provisioned service, offer that login with no way to skip it.
The only way we found to solve this was to unsubscribe from and remove the provisioning of any and all email accounts on the domain in question. In GoDaddy it was a matter of removing the configured email accounts. This took about 15 minutes to complete and remove the domain's records from the Office 365 service. Once complete, sign-in was accepted as expected.
WHM Cpanel Other Usage‡ folder large in size – Solution
If you are a WHM / cPanel admin and receive a warning that looks something like this:
The backup process on “your.server” failed. The backup failed to complete for the following reason:
Available disk space (17 percent) is too low. Backups will not run.
Start Time: Saturday, August 7, 2021 at 4:00:02 PM UTC
End Time: Saturday, August 7, 2021 at 4:00:03 PM UTC
Run Time: 1 second
This notice is the result of a request from “cPanel Backup System”. The system generated this notice on Saturday, August 7, 2021 at 4:00:03 PM UTC.
“Backup Failed To Finish” notifications are currently configured to have an importance of “High”. You can change the importance or disable this type of notification in WHM’s Contact Manager at:
This is a warning that disk space is too low, and you may be experiencing an open-session problem with one of your cPanel sites.
If you know the likely site at fault, log in to WHM, open the cPanel account in question, and view the Disk Usage report. Pay attention to the “Other Usage‡” entry.
These other usage files include email services for the account and should usually never be bigger than a couple of GB at most. In our case it was significantly higher than it should be, and it was the cause of the missing disk space needed for the backup process.
To diagnose this we used a small utility called ncdu, installed following the instructions found here: https://computingforgeeks.com/ncdu-analyze-disk-usage-in-linux-with-ncdu/
This is done through an SSH connection to the server in question as the root user. Then, following the instructions in the link above, we used the

ncdu -x /

command to search the root folder for large subfolders. What we found was that the /var/cpanel/php/sessions/ea-php71 folder was over 20 GB in size.
This folder holds sessions for that version of PHP, which in our case relates to one account that needs the older PHP version for a Magento site. It appears sessions were being left behind, and we needed to delete the folder's contents to reclaim the space taken by previously opened sessions.
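If you prefer one-off commands to an interactive tool, a rough equivalent is the sketch below. The sessions path is the one from this article; override TARGET to inspect another directory, and the GNU du and sort flags assume a typical Linux server.

```shell
# Show the largest first-level subdirectories of the PHP sessions folder,
# biggest first (du -x stays on one filesystem, like ncdu -x)
TARGET="${TARGET:-/var/cpanel/php/sessions}"
du -xh --max-depth=1 "$TARGET" 2>/dev/null | sort -rh | head
```

Once you have confirmed which subfolder is bloated, the same find-and-delete approach as any stale-file cleanup applies, but check what you are deleting before removing anything under a live account.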
The easy way to do this is with the “d” (delete) key in ncdu, again following the documentation.
An alternative method is a WHM file browser plugin, ConfigServer Explorer (cse), found here: https://www.configserver.com/cp/cse.html
This then lets you use the file browser from the WHM Plugins menu to erase the /var/cpanel/php/sessions folder without the need to mess around in SSH (other than the setup).
WHM Nginx Manager Enabled Gives 413 Request Entity Too Large Error and Solution
Recently WHM / cPanel added an option called Nginx Manager.
With this option enabled in cPanel you may receive a 413 error when uploading large files, such as plugins, to your WordPress site.
Now, it’s worth mentioning that there are potentially a couple of places where the upload size limit is set. This error relates specifically to the Nginx Manager caching option added to WHM and cPanel in 2021.
There are also options in the MultiPHP INI Editor to set upload_max_filesize, which we presume you have already configured in your WHM configuration panel.
So assuming you have previously set those limits and have had no trouble uploading things like large WordPress plugins or images, the 413 error is likely related to Nginx being enabled in the recent WHM update.
To fix the issue you will need to edit the nginx.conf file on your web server. Among other things, this file controls the maximum allowed upload size. It defaults to a small size to mitigate denial-of-service attacks, where someone throws large files at your site to block general access.
Editing .conf files can be done with various Linux editors. We suggest nano, as it is easy for those who, like me, keep things simple. If nano is not installed you can follow the instructions here: https://phoenixnap.com/kb/use-nano-text-editor-commands-linux
Log in to WHM, search for Terminal, and select it. (If you have the skills you can also connect to your server as root using your favourite terminal software; we use the web interface here because it is easier.)
Use the nano text editor:

$ sudo nano /etc/nginx/nginx.conf
Add the following line at the end of the http, server, or location context to increase the nginx size limit:
# set client body size to 20M
client_max_body_size 20M;
Once you have edited the file, exit with Control-X (answering Y to save).
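For context, the directive sits inside the http block like this. This is a sketch only: the 20M value matches the example above (raise it if your uploads are bigger), and the server block shown is a hypothetical minimal one, not your real WHM-generated config.

```nginx
http {
    # allow request bodies (file uploads) up to 20 MB
    client_max_body_size 20M;

    server {
        listen 80;
        server_name example.com;  # hypothetical domain
    }
}
```

Before restarting, it is worth validating the edited file with nginx -t so a typo does not take the web server down.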
Then search WHM for Nginx Manager.
Click on the Restart Nginx button.
That is it; this should get you running.