Shop-level product data on the 1688 platform is a core input for supply-chain analysis and competitor research, covering wholesale prices, minimum order quantities, and category distribution. Compared with single-product interfaces, a whole-shop product interface must solve additional challenges: paginated loading, category filtering, and anti-scraping limits. This article lays out a technical implementation for collecting all products of a 1688 shop, focusing on shop-ID resolution, multi-page data collection, and precise category filtering, and provides a compliant, efficient architecture that strictly follows platform rules and data-collection norms.
I. 1688 Shop Product Interface Architecture and Compliance Guidelines
1688 presents shop product data through a "shop homepage → product list page → paginated loading" hierarchy. The core interface is the paginated shop product list, which supports filtering by category, sales volume, and other dimensions. The implementation should follow these compliance guidelines:
- Request-rate control: keep at least a 15-second interval between page requests within a single shop, and collect from the same shop no more than 3 times per day
- Data scope: collect only publicly visible product information; never fetch shop transaction records, customer information, or other private content
- Commercial use: restrict the data to market research; do not use it for malicious competition or commercial disparagement
- Respect anti-scraping measures: do not forge request headers or crack interface encryption; fully mimic normal user browsing behavior
Core technical pipeline for whole-shop collection:
Shop ID resolution → homepage category extraction → pagination parameter construction → distributed request scheduling → data parsing and deduplication → structured storage
II. Core Technical Implementation
1. Shop ID Parser (adapted to 1688's URL formats)
1688 shop URLs come in several formats; the parser must extract the shop's unique identifier (memberId) from each format, falling back to the page content when the URL alone is not enough:
```python
import re

import requests
from lxml import etree


class AlibabaShopParser:
    """1688 shop info and ID parser"""

    def __init__(self):
        self.headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
            "Referer": "https://www.1688.com/"
        }
        # Shop URL patterns. Specific forms must come before the generic
        # subdomain form, otherwise the catch-all pattern would match first
        # and capture e.g. "shop123456789" or "www" as the memberId.
        self.shop_patterns = [
            r"https?://shop(\d+)\.1688\.com",  # numeric-ID subdomain: https://shop123456789.1688.com
            r"https?://www\.1688\.com/shop/view_shop\.htm\?memberId=(\w+)",  # canonical shop page
            r"https?://(\w+)\.1688\.com",  # generic subdomain: https://abc123.1688.com
        ]

    def extract_shop_id(self, shop_url):
        """Extract memberId (the shop's unique identifier) from a shop URL"""
        for pattern in self.shop_patterns:
            match = re.search(pattern, shop_url)
            if match:
                return match.group(1)
        # Direct URL parsing failed; fall back to the page content
        return self._extract_id_from_page(shop_url)

    def _extract_id_from_page(self, shop_url):
        """Extract memberId from the shop page content"""
        try:
            response = requests.get(
                shop_url,
                headers=self.headers,
                timeout=15,
                allow_redirects=True
            )
            response.encoding = "utf-8"
            tree = etree.HTML(response.text)
            # Try a meta tag first
            member_id_meta = tree.xpath('//meta[@name="memberId"]/@content')
            if member_id_meta and member_id_meta[0]:
                return member_id_meta[0]
            # Then scan inline scripts
            scripts = tree.xpath('//script/text()')
            for script in scripts:
                match = re.search(r'memberId\s*[:=]\s*["\'](\w+)["\']', script)
                if match:
                    return match.group(1)
            return None
        except Exception as e:
            print(f"Failed to extract shop ID from page: {e}")
            return None
```
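The URL patterns can be checked offline without any network access. A standalone sketch (the sample URLs below are invented for illustration):

```python
import re

# Mirrors AlibabaShopParser.shop_patterns: specific forms are tried before
# the catch-all subdomain form so it cannot swallow the other two.
SHOP_PATTERNS = [
    r"https?://shop(\d+)\.1688\.com",
    r"https?://www\.1688\.com/shop/view_shop\.htm\?memberId=(\w+)",
    r"https?://(\w+)\.1688\.com",
]

def extract_member_id(url):
    """Return the capturing group of the first pattern that matches, else None."""
    for pattern in SHOP_PATTERNS:
        match = re.search(pattern, url)
        if match:
            return match.group(1)
    return None

print(extract_member_id("https://shop123456789.1688.com"))  # 123456789
print(extract_member_id("https://abc123.1688.com"))         # abc123
```

Note that swapping the list order would break the first case: the generic `(\w+)\.1688\.com` pattern also matches `shop123456789.1688.com`.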
2. Pagination Parameter Generator (adapted to B2B pagination logic)
1688 shop listings use their own pagination mechanism, and different sort orders and filter conditions map to different parameter rules:
```python
import hashlib
import random
import time


class AlibabaShopProductParamsGenerator:
    """1688 shop product pagination parameter generator"""

    def __init__(self):
        self.base_url = "https://offerlist.1688.com/offerlist.htm"
        # Sort-order mapping
        self.sort_mapping = {
            "default": "",               # default order
            "newest": "create_desc",     # newest listings first
            "price_asc": "price_asc",    # price low to high
            "price_desc": "price_desc",  # price high to low
            "sales": "volume_desc"       # sales volume high to low
        }

    def generate_params(self, member_id, page=1, sort="default", category_id="", **filters):
        """
        Build the request parameters for the shop product list.
        :param member_id: shop memberId
        :param page: page number
        :param sort: sort order
        :param category_id: category ID (empty string means all categories)
        :param filters: optional filters:
            - min_price: minimum price
            - max_price: maximum price
            - is_wholesale: wholesale only (True/False)
        :return: complete parameter dict
        """
        params = {
            "memberId": member_id,
            "pageNum": page,
            "pageSize": 60,  # maximum products per page
            "sortType": self.sort_mapping.get(sort, ""),
            "categoryId": category_id,
            "offline": "false",  # online products only
            "sample": "false",   # exclude samples
            "isNoReload": "true",
            "enableAsync": "true",
            "async": "true",
            "_input_charset": "UTF-8",
            "timestamp": str(int(time.time() * 1000)),
            "rn": str(random.randint(1000000000, 9999999999))
        }
        # Price filters
        if filters.get("min_price"):
            params["priceStart"] = filters["min_price"]
        if filters.get("max_price"):
            params["priceEnd"] = filters["max_price"]
        # Wholesale filter
        if filters.get("is_wholesale"):
            params["wholesale"] = "true"
        # Signature (some requests require one)
        if random.random() > 0.5:  # simulate the case where a signature is required
            params["sign"] = self._generate_sign(params)
        return params

    def _generate_sign(self, params):
        """Placeholder signature: an MD5 over the sorted parameters.
        The real signing scheme is not public; this stub only keeps the
        code runnable and must not be mistaken for the platform's algorithm."""
        raw = "&".join(f"{k}={params[k]}" for k in sorted(params))
        return hashlib.md5(raw.encode("utf-8")).hexdigest()
```
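The parameter dict is ultimately serialized into a GET query string. A minimal offline sketch (the endpoint and parameter names follow the generator above and should be treated as assumptions about the live interface):

```python
import time
import urllib.parse

# Same sort-order mapping as the generator above
SORT_MAPPING = {"default": "", "newest": "create_desc", "price_asc": "price_asc",
                "price_desc": "price_desc", "sales": "volume_desc"}

def build_offerlist_url(member_id, page=1, sort="default", category_id=""):
    """Serialize pagination parameters into a list-page request URL."""
    params = {
        "memberId": member_id,
        "pageNum": page,
        "pageSize": 60,
        "sortType": SORT_MAPPING.get(sort, ""),
        "categoryId": category_id,
        "_input_charset": "UTF-8",
        "timestamp": str(int(time.time() * 1000)),
    }
    # urlencode handles percent-escaping of any non-ASCII values
    return "https://offerlist.1688.com/offerlist.htm?" + urllib.parse.urlencode(params)

print(build_offerlist_url("abc123", page=2, sort="sales"))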
3. 請(qǐng)求調(diào)度器(應(yīng)對(duì) B 端反爬限制)
針對(duì) 1688 嚴(yán)格的反爬限制,實(shí)現(xiàn)會(huì)話保持、代理輪換、請(qǐng)求間隔控制等策略:
import timeimport randomimport requestsfrom fake_useragent import UserAgentimport urllib3# 禁用不安全請(qǐng)求警告urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)class AlibabaShopProductRequester: """1688店鋪商品請(qǐng)求調(diào)度器""" def __init__(self, proxy_pool=None): self.proxy_pool = proxy_pool or [] self.ua = UserAgent() self.session = self._init_session() self.last_request_time = 0 self.min_interval = 15 # 頁(yè)面請(qǐng)求最小間隔(秒) self.max_retries = 3 # 最大重試次數(shù) def _init_session(self): """初始化會(huì)話,獲取基礎(chǔ)Cookie""" session = requests.Session() session.headers.update({ "User-Agent": self.ua.random, "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "zh-CN,zh;q=0.9", "Connection": "keep-alive", "Referer": "https://www.1688.com/", "Upgrade-Insecure-Requests": "1" }) # 預(yù)訪問(wèn)1688首頁(yè)獲取必要Cookie session.get("https://www.1688.com", verify=False, timeout=10) return session def _control_request_interval(self): """控制請(qǐng)求間隔,避免觸發(fā)反爬""" current_time = time.time() elapsed = current_time - self.last_request_time if elapsed < self.min_interval: sleep_time = self.min_interval - elapsed + random.uniform(2, 5) print(f"請(qǐng)求間隔不足,休眠 {sleep_time:.1f} 秒") time.sleep(sleep_time) self.last_request_time = time.time()
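The interval-control logic deserves to be isolated so it can be unit-tested without real waiting. The sketch below is a refactoring of `_control_request_interval` with injectable clock and sleep functions (the injection is a design choice of this sketch, not part of the scheduler above):

```python
import random
import time

class RequestPacer:
    """Standalone version of the interval control above, mirroring the
    15-second minimum gap between page requests."""

    def __init__(self, min_interval=15.0, clock=time.time, sleeper=time.sleep):
        self.min_interval = min_interval
        self.clock = clock      # injectable for testing
        self.sleeper = sleeper  # injectable for testing
        self.last_request_time = 0.0

    def wait(self):
        """Block until at least min_interval (plus jitter) has elapsed."""
        elapsed = self.clock() - self.last_request_time
        if elapsed < self.min_interval:
            # random jitter so request timing is not perfectly regular
            self.sleeper(self.min_interval - elapsed + random.uniform(2, 5))
        self.last_request_time = self.clock()
```

With the defaults the behavior matches the scheduler; tests can substitute a fake clock and assert on requested sleep durations instead of actually sleeping.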
4. Product Data Parser (extracting B2B-specific fields)
The parser processes the product list page, extracts B2B-specific data such as wholesale price, minimum order quantity, and sales volume, and handles pagination info:
```python
import json
import re

from lxml import etree


class AlibabaShopProductParser:
    """1688 shop product data parser"""

    def __init__(self):
        # Regexes that locate product data embedded in the page
        self.product_data_pattern = re.compile(
            r'window\.__page__data__\s*=\s*({.*?});\s*</script>', re.DOTALL)
        self.offer_list_pattern = re.compile(
            r'offerList\s*:\s*(\[.*?\])', re.DOTALL)

    def parse_products_page(self, html_content):
        """Parse a shop product list page"""
        if not html_content:
            return None
        # First try the embedded JSON payload
        json_data = self._extract_json_data(html_content)
        if json_data:
            return self._parse_from_json(json_data)
        # JSON extraction failed; fall back to parsing the HTML itself
        return self._parse_from_html(html_content)

    def _extract_json_data(self, html_content):
        """Extract the embedded JSON payload from the page"""
        match = self.product_data_pattern.search(html_content)
        if match:
            try:
                return json.loads(match.group(1))
            except json.JSONDecodeError:
                print("Failed to parse page JSON payload")
        # Fall back to the simplified offer-list payload
        match = self.offer_list_pattern.search(html_content)
        if match:
            try:
                return {"offerList": json.loads(match.group(1))}
            except json.JSONDecodeError:
                print("Failed to parse offer-list payload")
        return None
```
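The category collector in the next subsection also calls `parser.remove_duplicates`, which is not shown above. A minimal order-preserving sketch (the `offer_id` field name is an assumption about what the parser emits):

```python
def remove_duplicates(products, key="offer_id"):
    """Deduplicate products by a stable identifier, keeping the first
    occurrence; items without the key are kept as-is."""
    seen = set()
    unique = []
    for product in products:
        pid = product.get(key)
        if pid is None:
            unique.append(product)  # no identifier: cannot safely drop
        elif pid not in seen:
            seen.add(pid)
            unique.append(product)
    return unique
```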
5. Category Collector (multi-threaded parallel collection)
A thread pool collects categories in parallel, improving throughput while keeping resource usage bounded:
```python
from concurrent.futures import ThreadPoolExecutor, as_completed


class AlibabaShopCategoryCollector:
    """Shop product category collector"""

    def __init__(self, requester, parser, params_generator):
        self.requester = requester
        self.parser = parser
        self.params_generator = params_generator
        self.max_workers = 2  # category-level concurrency (keep this low)

    def collect_by_category(self, member_id, categories, max_pages_per_cat=3):
        """
        Collect shop products category by category.
        :param member_id: shop ID
        :param categories: category list (from AlibabaShopParser)
        :param max_pages_per_cat: maximum pages to collect per category
        :return: merged product list
        """
        if not categories:
            print("No category info; cannot collect by category")
            return None

        all_products = []
        category_results = {}
        with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
            # Submit one collection task per category
            future_tasks = {}
            for cat in categories:
                future = executor.submit(
                    self._collect_single_category,
                    member_id, cat, max_pages_per_cat
                )
                future_tasks[future] = cat["category_name"]

            # Handle task results as they complete
            for future in as_completed(future_tasks):
                cat_name = future_tasks[future]
                try:
                    result = future.result()
                    if result and result["products"]:
                        category_results[cat_name] = result
                        all_products.extend(result["products"])
                        print(f"Category [{cat_name}] done: {len(result['products'])} products")
                    else:
                        print(f"Category [{cat_name}] failed or returned no products")
                except Exception as e:
                    print(f"Category [{cat_name}] raised: {e}")

        # Deduplicate and attach category labels
        unique_products = self.parser.remove_duplicates(all_products)
        for product in unique_products:
            # Tag each product with the category it came from
            for cat_name, cat_data in category_results.items():
                if product in cat_data["products"]:
                    product["category"] = cat_name
                    break

        return {
            "total_products": len(unique_products),
            "category_counts": {k: len(v["products"]) for k, v in category_results.items()},
            "products": unique_products
        }
```
III. Complete Shop Product Collection Service
The components above combine into a complete collection service supporting both full-shop and per-category modes:
```python
from datetime import datetime


class AlibabaShopProductService:
    """Complete 1688 shop product collection service"""

    def __init__(self, proxy_pool=None):
        self.shop_parser = AlibabaShopParser()
        self.params_generator = AlibabaShopProductParamsGenerator()
        self.requester = AlibabaShopProductRequester(proxy_pool=proxy_pool)
        self.product_parser = AlibabaShopProductParser()
        self.category_collector = AlibabaShopCategoryCollector(
            self.requester, self.product_parser, self.params_generator
        )

    def collect_shop_products(self, shop_url, max_pages=5,
                              by_category=False, max_pages_per_cat=3):
        """
        Collect all products of a shop.
        :param shop_url: shop URL
        :param max_pages: maximum pages (full-shop mode)
        :param by_category: collect category by category
        :param max_pages_per_cat: maximum pages per category
        :return: dict with shop info and the product list
        """
        # 1. Fetch basic shop info
        print("Fetching basic shop info...")
        shop_info = self.shop_parser.get_shop_base_info(shop_url)
        if not shop_info or not shop_info["member_id"]:
            print("Could not fetch shop info; aborting")
            return None
        member_id = shop_info["member_id"]
        print(f"Shop: {shop_info['shop_name']} (ID: {member_id})")

        # 2. Fetch shop categories
        print("Fetching shop categories...")
        categories = self.shop_parser.get_shop_categories(member_id)
        if categories:
            print(f"Found {len(categories)} categories: "
                  f"{[c['category_name'] for c in categories]}")
        else:
            print("No category info found")
            by_category = False  # cannot collect by category

        # 3. Collect products
        if by_category and categories:
            print("Collecting by category...")
            product_result = self.category_collector.collect_by_category(
                member_id=member_id,
                categories=categories,
                max_pages_per_cat=max_pages_per_cat
            )
        else:
            print("Collecting all products...")
            product_result = self._collect_all_products(
                member_id=member_id,
                max_pages=max_pages
            )

        if not product_result or not product_result["products"]:
            print("No products collected")
            return None

        # 4. Assemble the result
        return {
            "shop_info": shop_info,
            "collection_time": datetime.now().strftime("%Y-%m-%d %H:%M:%S"),
            "total_products": product_result["total_products"],
            "category_distribution": product_result.get("category_counts", {}),
            "products": product_result["products"]
        }
```
IV. Usage Example, Data Storage, and Analysis
1. Basic Usage Example
```python
def main():
    # Proxy pool (replace with working proxies in production)
    proxy_pool = [
        # "http://123.123.123.123:8080",
        # "http://111.111.111.111:8888"
    ]

    # Initialize the shop product collection service
    service = AlibabaShopProductService(proxy_pool=proxy_pool)

    # Shop URL (replace with a real shop URL)
    shop_url = "https://shop123456789.1688.com"

    # Collect by category, at most 2 pages per category
    result = service.collect_shop_products(
        shop_url=shop_url,
        by_category=True,
        max_pages_per_cat=2
    )

    # Handle the result
    if result:
        print(f"\nDone! Collected {result['total_products']} products")
        # Print shop info
        print(f"\nShop name: {result['shop_info']['shop_name']}")
        print(f"Main category: {result['shop_info']['main_category']}")
        print(f"Years in operation: {result['shop_info']['operation_years']}")
        print(f"Credit level: {result['shop_info']['credit_level']}")
```
2. Data Storage and Analysis Tools
```python
import csv
import json
from datetime import datetime
from pathlib import Path

import matplotlib.pyplot as plt
import pandas as pd

# Configure fonts so CJK labels render correctly
plt.rcParams["font.family"] = ["SimHei", "WenQuanYi Micro Hei", "Heiti TC"]


class ShopProductStorageAnalyzer:
    """Shop product data storage and analysis tool"""

    def __init__(self, storage_dir="./1688_shop_products"):
        self.storage_dir = Path(storage_dir)
        self.storage_dir.mkdir(exist_ok=True, parents=True)

    def save_results(self, result):
        """Persist a collection result"""
        shop_name = result["shop_info"]["shop_name"].replace('/', '_')
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

        # Full result (JSON)
        json_path = self.storage_dir / f"{shop_name}_full_{timestamp}.json"
        with open(json_path, "w", encoding="utf-8") as f:
            json.dump(result, f, ensure_ascii=False, indent=2, default=str)

        # Product list (CSV)
        csv_path = self.storage_dir / f"{shop_name}_products_{timestamp}.csv"
        self._save_products_to_csv(result["products"], csv_path)

        print(f"Data saved to:\n- {json_path}\n- {csv_path}")
        return json_path, csv_path

    def analyze_shop_products(self, result):
        """Analyze shop product data"""
        if not result or not result["products"]:
            return None

        print("\nAnalyzing shop product data...")
        products = result["products"]
        shop_name = result["shop_info"]["shop_name"]

        # 1. Category distribution
        self._analyze_category_distribution(products, shop_name)
        # 2. Price distribution
        self._analyze_price_distribution(products, shop_name)
        # 3. Minimum order quantity
        self._analyze_min_order(products, shop_name)
        # 4. Sales vs. price
        self._analyze_sales_vs_price(products, shop_name)
        return True
```
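The plotting helpers are omitted above, but the bucketing step behind a price-distribution analysis can be sketched without matplotlib (the `price` field name and the bucket edges are illustrative assumptions):

```python
from collections import Counter

def price_bucket(price, edges=(10, 50, 100, 500)):
    """Map a wholesale price to a human-readable bucket label."""
    for edge in edges:
        if price < edge:
            return f"<{edge}"
    return f">={edges[-1]}"

def price_distribution(products):
    """Count products per price bucket, skipping items without a price."""
    return dict(Counter(
        price_bucket(p["price"]) for p in products
        if p.get("price") is not None
    ))

sample = [{"price": 5}, {"price": 30}, {"price": 30}, {"price": 800}, {"name": "no price"}]
print(price_distribution(sample))  # {'<10': 1, '<50': 2, '>=500': 1}
```

The resulting counts can feed straight into `plt.bar` or a pandas `Series` for the charts the analyzer produces.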
V. Optimization, Compliance, and Risk Notes
1. System Optimization Strategies
- Incremental collection: record collected product IDs and fetch only new or updated products
```python
def incremental_collect(self, shop_url, last_collected_ids):
    """Incremental collection: fetch new products only"""
    # implementation logic...
    return new_products
```
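The filtering core of that stub can be made concrete. A sketch under the assumption that each parsed product carries an `offer_id` field:

```python
def filter_new_products(products, last_collected_ids):
    """Keep only products whose ID was not seen in a previous run."""
    known = set(last_collected_ids)  # set lookup is O(1) per product
    return [p for p in products if p.get("offer_id") not in known]

previous = {"101", "102"}
batch = [{"offer_id": "101"}, {"offer_id": "103"}, {"offer_id": "104"}]
print([p["offer_id"] for p in filter_new_products(batch, previous)])  # ['103', '104']
```

The previously collected IDs would typically be loaded from the JSON/CSV output of an earlier run before calling this.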
- Smart caching: cache shop category info and already-collected products to reduce repeat requests
- Distributed collection: for large-scale collection, use a distributed architecture to spread the IP load
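The caching strategy above can be realized with a tiny time-based cache for category lists and parsed pages. A sketch with an injectable clock (chosen for testability; production code might instead use `functools.lru_cache` or an external store such as Redis):

```python
import time

class TTLCache:
    """Minimal in-memory cache whose entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds=3600, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() > expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)
```

A one-hour TTL is a reasonable default for category lists, which change far less often than product listings.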
2. Compliance and Risk Notes
- Obtain written authorization from both the 1688 platform and the shop before any commercial use, and comply with China's E-Commerce Law
- Keep per-shop collection frequency low; wait at least 24 hours before re-collecting the same shop
- Do not use collected shop product data to build products or services that compete with that shop
- Respect shop business information; do not abuse the data for price wars or malicious competition
- If an anti-scraping mechanism is triggered, stop collecting immediately and wait at least 48 hours before retrying
The approach presented here yields a fully functional 1688 whole-shop product collection system, tuned for the characteristics of B2B e-commerce: per-category collection, product deduplication, and data-distribution analysis support supply-chain analysis, competitor research, and similar scenarios. In practice, pay particular attention to the platform's strict limits on bulk shop collection and ensure compliant use. Questions about interface adaptation or anti-scraping strategy are welcome as technical discussion building on this approach.