Whether you shop online or run an online store, monitoring product prices can help you save money and give you a competitive edge, especially during sales or promotions. Many e-commerce enthusiasts use browser extensions to track the prices of their favorite products, but extensions only work while the browser is open and don't scale well beyond a handful of products, which is why proxy-based price tracking is a more suitable option.
In this guide, you'll learn how to build your own product price drop alert system using residential proxies from The Social Proxy. We'll walk you through a step-by-step process of building, testing, and visualizing product price changes on Amazon and sending alerts when prices drop by 20% or more. You'll be able to reliably scrape price data at any frequency without getting blocked by e-commerce platforms.
There are several benefits to tracking product prices. Whether you're an online shopper hunting for the best deals or a store owner trying to stay competitive, price tracking helps you get the best value for your money or deliver it to your customers. With a 20% price drop alert mechanism, you can respond to market changes, capitalize on discounts, and make data-driven decisions that drive growth for your business.
Price drop alerts let shoppers stay informed and buy products at a lower cost, saving both time and money. They're especially useful for products prone to price fluctuations, such as electronics or fashion items. Instead of checking prices manually, you simply receive a notification when a price drops.
Meanwhile, if you run an e-commerce business, price drop alerts help you keep your finger on the pulse and adjust your pricing strategy competitively. Timing is crucial when it comes to jumping on an opportunity to promote sales and boost revenue.
Price drop alerts can also help you reduce stockouts: when you're notified that prices are dropping on products that are running low in stock, you get a head start to restock in time and avoid losing sales, which in turn helps you retain customers and maintain satisfaction.
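At its core, the 20% rule is a one-line calculation comparing the newest price against the last one you recorded. Here's a minimal sketch of that check (the prices are hypothetical), which is the same comparison we'll wire into the email alerts later:
old_price = 249.99  # hypothetical previously recorded price
new_price = 189.99  # hypothetical current price
drop_percentage = ((old_price - new_price) / old_price) * 100
if drop_percentage >= 20:
    print(f"Price dropped by {drop_percentage:.1f}%, time to act")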
We’ll use the requests and BeautifulSoup libraries to scrape Amazon products. This tutorial will reference the Acer Aspire Go 15 Slim Laptop on Amazon. Feel free to replace the Amazon product URL with the URL of a product of your choice.
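If you don't already have these libraries, install them with pip:
pip install requests beautifulsoup4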
To scrape the prices and build the price tracker, we’ll use the following code:
import requests
from bs4 import BeautifulSoup
# Define the URL of the product
amazon_url = "https://www.amazon.com/Display-i3-N305-Graphics-Windows-AG15-31P-3947/dp/B0CV5ZSR17/"
# Headers to mimic a real browser request
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
"Connection": "keep-alive",
"DNT": "1", # Do Not Track
}
# Function to get the page content (without proxy)
def get_amazon_product_page(url, headers):
try:
# Make the request without proxy
response = requests.get(url, headers=headers)
response.raise_for_status() # Check if the request was successful
return response.text
except requests.exceptions.HTTPError as http_err:
print(f"HTTP error occurred: {http_err}")
except Exception as err:
print(f"An error occurred: {err}")
return None
# Function to parse product details from Amazon's HTML page
def parse_amazon_product(html_content):
# Use 'html.parser' since it is built-in
soup = BeautifulSoup(html_content, "html.parser")
# Extract product title
title = soup.find(id="productTitle")
if title:
title = title.get_text(strip=True)
# Extract product price
price = soup.find("span", {"class": "a-price-whole"})
if price:
price = price.get_text(strip=True)
# Extract product rating
rating = soup.find("span", {"class": "a-icon-alt"})
if rating:
rating = rating.get_text(strip=True)
# Print extracted data
print("Title:", title)
print("Price:", price)
print("Rating:", rating)
if __name__ == "__main__":
# Get the page HTML content
html_content = get_amazon_product_page(amazon_url, headers)
if html_content:
# Parse and extract product information
parse_amazon_product(html_content)
This script scrapes the Acer laptop page to extract and display the product’s title, price, and rating. It sends a request to the Amazon URL using specific headers to mimic a real browser, which helps minimize the risk of getting blocked. After fetching the page, the script parses the HTML using BeautifulSoup and prints the extracted details without using any proxies.
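One robustness tweak worth considering: the request above doesn't set a timeout, so a stalled connection can hang the script indefinitely. Here's a hedged sketch of a small retry wrapper with a timeout and exponential backoff (the retry count and delays are illustrative, not prescriptive):
import time
import requests

def get_with_retries(url, headers, retries=3):
    # Retry transient failures, backing off a little longer each attempt
    for attempt in range(retries):
        try:
            response = requests.get(url, headers=headers, timeout=30)
            response.raise_for_status()
            return response.text
        except requests.exceptions.RequestException as err:
            print(f"Attempt {attempt + 1} failed: {err}")
            time.sleep(2 ** attempt)
    return None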
If the request is successful and the page doesn’t return a CAPTCHA or block, your output should resemble the one below:
To avoid getting blocked by Amazon's anti-bot measures, you'll need to integrate your code with residential proxies. Create an account with The Social Proxy. For a step-by-step guide on how to set up residential proxies, follow this article. Your residential proxy credentials include the host, port, username, and password.
Incorporate your residential proxy credentials to scrape product prices with the code below:
import requests
from bs4 import BeautifulSoup
# Define the URL of the product
amazon_url = "https://www.amazon.com/Display-i3-N305-Graphics-Windows-AG15-31P-3947/dp/B0CV5ZSR17/"
# Proxy information (replace with your actual proxy credentials)
proxy = {
"http": "http://username:password@host:port",
"https": "http://username:password@host:port"
}
# Headers to mimic a real browser request
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
"Connection": "keep-alive",
"DNT": "1", # Do Not Track
}
# Function to get the page content via the proxy
def get_amazon_product_page(url, proxy, headers):
try:
# Make the request through the proxy
response = requests.get(url, proxies=proxy, headers=headers)
response.raise_for_status() # Check if the request was successful
return response.text
except requests.exceptions.HTTPError as http_err:
print(f"HTTP error occurred: {http_err}")
except requests.exceptions.ProxyError as proxy_err:
print(f"Proxy error occurred: {proxy_err}")
except Exception as err:
print(f"An error occurred: {err}")
return None
# Function to parse product details from Amazon's HTML page
def parse_amazon_product(html_content):
# Use 'html.parser' since it is built-in
soup = BeautifulSoup(html_content, "html.parser")
# Extract product title
title = soup.find(id="productTitle")
if title:
title = title.get_text(strip=True)
# Extract product price
price = soup.find("span", {"class": "a-price-whole"})
if price:
price = price.get_text(strip=True)
# Extract product rating
rating = soup.find("span", {"class": "a-icon-alt"})
if rating:
rating = rating.get_text(strip=True)
# Print extracted data
print("Title:", title)
print("Price:", price)
print("Rating:", rating)
if __name__ == "__main__":
# Get the page HTML content
html_content = get_amazon_product_page(amazon_url, proxy, headers)
if html_content:
# Parse and extract product information
parse_amazon_product(html_content)
This code scrapes the product title, price, and rating from an Amazon product page using a proxy with authentication. The request gets made through the proxy while mimicking a real browser with headers, and the HTML content gets fetched from the provided Amazon URL. Afterward, BeautifulSoup is used to parse the page and extract the product’s title, price, and rating, which are then printed.
Your output should resemble the image below.
This output shows that the price of the Acer laptop is $249.
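Note that the a-price-whole element captures only the dollar portion of the price, which is why the output shows a whole number. If you also want the cents, Amazon typically renders them in a separate a-price-fraction span; here's a hedged sketch of combining the two (class names can vary across page layouts, so verify them against the page you're scraping):
whole = soup.find("span", {"class": "a-price-whole"})
fraction = soup.find("span", {"class": "a-price-fraction"})
if whole and fraction:
    # Strip the thousands separator and any trailing period before joining
    whole_text = whole.get_text(strip=True).replace(',', '').rstrip('.')
    price = float(whole_text + '.' + fraction.get_text(strip=True))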
Use the schedule library to set up a job that checks prices every 24 hours.
Install the schedule library if you haven’t already:
pip install schedule
Modify your Python script to include scheduling using the following code:
import requests
from bs4 import BeautifulSoup
import schedule
import time
# Define the URL of the product
amazon_url = "https://www.amazon.com/Display-i3-N305-Graphics-Windows-AG15-31P-3947/dp/B0CV5ZSR17/"
# Proxy information (replace with your actual proxy credentials)
proxy = {
"http": "http://username:password@host:port",
"https": "http://username:password@host:port"
}
# Headers to mimic a real browser request
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
"Connection": "keep-alive",
"DNT": "1", # Do Not Track
}
# Function to get the page content via the proxy
def get_amazon_product_page(url, proxy, headers):
try:
# Make the request through the proxy
response = requests.get(url, proxies=proxy, headers=headers)
response.raise_for_status() # Check if the request was successful
return response.text
except requests.exceptions.HTTPError as http_err:
print(f"HTTP error occurred: {http_err}")
except requests.exceptions.ProxyError as proxy_err:
print(f"Proxy error occurred: {proxy_err}")
except Exception as err:
print(f"An error occurred: {err}")
return None
# Function to parse product details from Amazon's HTML page
def parse_amazon_product(html_content):
# Use 'html.parser' since it is built-in
soup = BeautifulSoup(html_content, "html.parser")
# Extract product title
title = soup.find(id="productTitle")
if title:
title = title.get_text(strip=True)
# Extract product price
price = soup.find("span", {"class": "a-price-whole"})
if price:
price = price.get_text(strip=True)
# Extract product rating
rating = soup.find("span", {"class": "a-icon-alt"})
if rating:
rating = rating.get_text(strip=True)
# Print extracted data
print("Title:", title)
print("Price:", price)
print("Rating:", rating)
# Function to check the price every 24 hours
def check_price_job():
print("Checking price...")
html_content = get_amazon_product_page(amazon_url, proxy, headers)
if html_content:
parse_amazon_product(html_content)
# Schedule the job to run every 24 hours
schedule.every(24).hours.do(check_price_job)
# Keep the script running
while True:
schedule.run_pending()
time.sleep(1)
This script uses a proxy to scrape the product title, price, and rating from an Amazon product page every 24 hours. The schedule library is used to set up a recurring job, which sends a request through the proxy, fetches the page content, and extracts the necessary product details. The script runs in an infinite loop to ensure that the scheduled task continues to check the price daily.
If a price is found, it will be printed:
Otherwise, it prints any errors encountered during the process:
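If you'd rather run the first check immediately instead of waiting 24 hours, or pin the job to a fixed time of day, the schedule library supports both. A small sketch (the time of day is just an example):
# Run once at startup, then every day at a fixed time
check_price_job()
schedule.every().day.at("09:00").do(check_price_job)

while True:
    schedule.run_pending()
    time.sleep(60)  # checking once a minute is plenty at this frequency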
Now that your price tracker is set up, let’s implement an alert system to notify when prices drop by 20% or more. We’ll use the smtplib library to send email alerts, but you can also use other notification services like SendGrid or Mailgun for more secure solutions.
The smtplib library ships with Python's standard library, so there's nothing extra to install.
Also, make sure you have access to an email account that supports SMTP (like Gmail). If you're using Gmail, you'll need to generate an app password for the script, since Google no longer allows password logins from "less secure apps."
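Rather than hardcoding your credentials into the script, a safer pattern is to read them from environment variables. A minimal sketch (the variable names are just examples):
import os

# Read credentials from the environment instead of embedding them in code
EMAIL = os.environ["ALERT_EMAIL"]
PASSWORD = os.environ["ALERT_EMAIL_PASSWORD"]
TO_EMAIL = os.environ.get("ALERT_RECIPIENT", EMAIL)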
Next, update your script with the following code, replace the email credentials with yours, and set the correct SMTP server and port according to your email provider:
import requests
from bs4 import BeautifulSoup
import schedule
import time
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
# Email credentials (update with your credentials)
EMAIL = "your_email@gmail.com"
PASSWORD = "your_email_password"
TO_EMAIL = "recipient_email@gmail.com"
# Define the URL of the product
amazon_url = "https://www.amazon.com/Display-i3-N305-Graphics-Windows-AG15-31P-3947/dp/B0CV5ZSR17/"
# Proxy information (replace with your actual proxy credentials)
proxy = {
"http": "http://username:password@host:port",
"https": "http://username:password@host:port"
}
# Headers to mimic a real browser request
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
"Connection": "keep-alive",
"DNT": "1", # Do Not Track
}
# Initialize the previous price for comparison
previous_price = None
# Function to get the page content via the proxy
def get_amazon_product_page(url, proxy, headers):
try:
response = requests.get(url, proxies=proxy, headers=headers)
response.raise_for_status() # Check if the request was successful
return response.text
except requests.exceptions.HTTPError as http_err:
print(f"HTTP error occurred: {http_err}")
except requests.exceptions.ProxyError as proxy_err:
print(f"Proxy error occurred: {proxy_err}")
except Exception as err:
print(f"An error occurred: {err}")
return None
# Function to parse product details from Amazon's HTML page
def parse_amazon_product(html_content):
global previous_price
soup = BeautifulSoup(html_content, "html.parser")
# Extract product title
title = soup.find(id="productTitle")
if title:
title = title.get_text(strip=True)
# Extract product price
price = soup.find("span", {"class": "a-price-whole"})
if price:
price = price.get_text(strip=True).replace(',', '') # Remove commas from the price
price = float(price) # Convert to float for comparison
# Print extracted data
print("Title:", title)
print("Price:", price)
# Check if the price has dropped by 20% or more
if previous_price is not None:
price_drop_percentage = ((previous_price - price) / previous_price) * 100
if price_drop_percentage >= 20:
send_email_alert(title, previous_price, price)
# Update previous price
previous_price = price
# Function to send email alert
def send_email_alert(title, old_price, new_price):
try:
# Set up the email
msg = MIMEMultipart()
msg['From'] = EMAIL
msg['To'] = TO_EMAIL
msg['Subject'] = f"Price Drop Alert: {title}"
body = f"The price of '{title}' has dropped!\n\nOld Price: ${old_price}\nNew Price: ${new_price}\n\nCheck it out here: {amazon_url}"
msg.attach(MIMEText(body, 'plain'))
# Send the email
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(EMAIL, PASSWORD)
text = msg.as_string()
server.sendmail(EMAIL, TO_EMAIL, text)
server.quit()
print(f"Price drop alert sent to {TO_EMAIL}!")
except Exception as e:
print(f"Failed to send email: {e}")
# Function to check the price every 24 hours
def check_price_job():
print("Checking price...")
html_content = get_amazon_product_page(amazon_url, proxy, headers)
if html_content:
parse_amazon_product(html_content)
# Schedule the job to run every 24 hours
schedule.every(24).hours.do(check_price_job)
# Keep the script running
while True:
schedule.run_pending()
time.sleep(1)
The code monitors the price of the Acer laptop using a proxy and checks for a price drop of 20% or more. It sends an email alert via SMTP (configured with your email credentials) when such a drop occurs, including details like the old price, new price, and a link to the product. The program will run continuously and check the product’s price every 24 hours, using the schedule library to handle the time intervals.
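One caveat: previous_price lives only in memory, so the baseline resets every time the script restarts. If you want the comparison to survive restarts, you could persist the last observed price to a small JSON file; here's a hedged sketch (the file name is arbitrary):
import json
import os

PRICE_FILE = "last_price.json"  # arbitrary file name for this sketch

def load_previous_price():
    # Return the last saved price, or None on the first run
    if os.path.exists(PRICE_FILE):
        with open(PRICE_FILE) as f:
            return json.load(f).get("price")
    return None

def save_price(price):
    with open(PRICE_FILE, "w") as f:
        json.dump({"price": price}, f)

You would then call load_previous_price() once at startup and save_price() after each successful check.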
To test our price tracker and alert system, we’ll set up a sample dataset with 10 Amazon products and track their prices over 14 days.
First, install matplotlib, along with pandas and numpy, which the script also uses:
pip install matplotlib pandas numpy
Update your script to fetch product prices, simulate fluctuations, track price changes, generate a graph, and send alerts for significant price drops:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
# Sample proxy (replace with your own proxy details)
proxy = {
"http": "http://username:password@host:port",
"https": "http://username:password@host:port"
}
# Headers to mimic a real browser request
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8",
"Connection": "keep-alive",
"DNT": "1"
}
# Sample dataset of 10 products with their URLs (mocked for demonstration)
products = [
{"name": "Hanes Breathable ComfortFlex", "url": "https://www.amazon.com/Hanes-Breathable-ComfortFlex-Waistband-Multipack/dp/B086KSDTQ4/"},
{"name": "PreserVision AREDS Supplement", "url": "https://www.amazon.com/PreserVision-AREDS-Vitamin-Mineral-Supplement/dp/B00DJUK8HS/"},
{"name": "Nature's Bounty Probiotics", "url": "https://www.amazon.com/Natures-Bounty-Probiotics-Supplement-Acidophilus/dp/B004JO3JTM/"},
{"name": "AmazonBasics Batteries", "url": "https://www.amazon.com/AmazonBasics-Performance-Alkaline-Batteries-Count/dp/B00MNV8E0C/"},
{"name": "Tree of Life Hyaluronic Acid", "url": "https://www.amazon.com/Tree-Life-Hyaluronic-Brightening-Hydrating/dp/B014PGEEO2/"},
{"name": "Massage Gun", "url": "https://www.amazon.com/Massage-Tissue-Percussion-Massager-Athletes/dp/B09JBCSC7H/"},
{"name": "Fruit of the Loom Briefs", "url": "https://www.amazon.com/Fruit-Loom-Brief-12-Pack-Assorted-Heathers/dp/B086Z331TY/"},
{"name": "Probiotics Supplement", "url": "https://www.amazon.com/Probiotics-Formulated-Probiotic-Supplement-Acidophilus/dp/B079H53D2B/"},
{"name": "Duracell CopperTop Batteries", "url": "https://www.amazon.com/Duracell-CopperTop-Batteries-All-Purpose-Household/dp/B004K95PBQ/"},
{"name": "Gloria Vanderbilt Jeans", "url": "https://www.amazon.com/Gloria-Vanderbilt-Classic-Tapered-Scottsdale/dp/B01KO2GU3Y/"},
]
# Function to get product page via proxy
def get_product_page(url):
try:
response = requests.get(url, proxies=proxy, headers=headers)
response.raise_for_status()
return response.text
except Exception as e:
print(f"Failed to fetch product page: {e}")
return None
# Function to simulate fetching Amazon product price from HTML using BeautifulSoup
def parse_amazon_price(html_content):
soup = BeautifulSoup(html_content, 'html.parser')
# Simulate finding the product price (mock for demo purposes)
price_tag = soup.find('span', {'class': 'a-price-whole'})
if price_tag:
return float(price_tag.get_text(strip=True).replace(',', ''))
# Mock return if parsing fails (simulating real prices)
return round(random.uniform(50, 600), 2)
# Function to simulate price fluctuations
def simulate_price_change(current_price):
change = random.uniform(-0.1, 0.1) * current_price # price changes between -10% to +10%
return round(current_price + change, 2)
# Log price changes over 14 days
days = 14
price_data = {product['name']: [100.0] for product in products} # Start with a mocked initial price of $100
for day in range(1, days + 1):
for product in products:
new_price = simulate_price_change(price_data[product['name']][-1])
price_data[product['name']].append(new_price)
# Convert to DataFrame for easier plotting
price_df = pd.DataFrame(price_data)
price_df['Day'] = np.arange(0, days + 1)
# Plotting the price changes
def plot_price_changes(df):
plt.figure(figsize=(10, 6))
for product in df.columns[:-1]: # Exclude 'Day'
plt.plot(df['Day'], df[product], label=product)
plt.title('Price Fluctuations Over 14 Days')
plt.xlabel('Day')
plt.ylabel('Price')
plt.legend(loc='upper right')
plt.grid(True)
plt.show()
# Generate the graph showing price changes
plot_price_changes(price_df)
# Simulating a forced price drop and showing the alert mechanism
def send_email_alert(product_name, initial_price, current_price, url):
# Mock sending an email alert
print(f"ALERT: {product_name} has dropped from ${initial_price:.2f} to ${current_price:.2f}. Check it here: {url}")
# Check for price drops and trigger alerts
def check_for_price_drops(df):
for product in products:
initial_price = df[product['name']][0] # Initial price (day 0)
current_price = df[product['name']].iloc[-1] # Get latest price (14th day)
# Calculate percentage drop
price_drop_percentage = ((initial_price - current_price) / initial_price) * 100
# If price dropped by 20% or more, trigger alert
if price_drop_percentage >= 20:
send_email_alert(product['name'], initial_price, current_price, product['url'])
# Check for price drops after the 14-day simulation
check_for_price_drops(price_df)
# Force a price drop for one of the products for demonstration (if none occurs)
def force_price_drop(df):
product_to_drop = random.choice(df.columns[:-1]) # Select a random product
df.loc[days, product_to_drop] = df.loc[0, product_to_drop] * 0.75 # Simulate a 25% drop in price
# Force a price drop for demonstration
force_price_drop(price_df)
# Plot the forced price changes
plot_price_changes(price_df)
# Check for price drops again after forcing it
check_for_price_drops(price_df)
The code simulates the tracking of prices for a sample dataset of 10 products over 14 days. It uses a proxy to fetch product pages and uses BeautifulSoup to parse prices, although the actual price retrieval is simulated with random data. The script logs price fluctuations, generates a line graph to visualize these changes, and sends email alerts for significant price drops (20% or more) with additional functionality to simulate a price drop for demonstration purposes.
Note: Before running the price tracker and alert system, make sure to replace the placeholders in the code. Update the proxy details with your actual proxy details and replace the sample product URLs in the products list with actual Amazon product URLs.
After running the visualization, you should get something that looks like this, depending on the products you use.
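If you want to keep a record of each run, you can also write the chart and the raw price history to disk; for example:
# Call savefig before plt.show(), otherwise the saved figure may be blank
plt.savefig("price_changes.png", dpi=150, bbox_inches="tight")
price_df.to_csv("price_history.csv", index=False)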
In this article, you've learned how to build a customizable product price tracker with an alert system using The Social Proxy's residential proxies. With this system, you can monitor price changes on Amazon and receive notifications when prices drop by 20% or more. You can also adapt the code to track prices on other e-commerce platforms like eBay or Etsy, giving you a powerful price tracking system that helps you save money and stay ahead of the competition.
If you’re ready to start leveraging proxies for your e-commerce price tracking, sign up for The Social Proxy’s residential proxy today and get free access to the dashboard.