Hands-On Intelligent Scraping of Company Reviews with Playwright and Asynchronous Python: A Glassdoor Case Study

张小明

Front-end Development Engineer


Introduction: The Value and Challenges of Mining Company Review Data

In today's digital business environment, online company reviews have become a key factor influencing investor decisions, talent recruitment, and brand reputation. Career platforms such as Glassdoor and Indeed have accumulated a huge volume of anonymous employee reviews, and this data is valuable for analyzing company culture, compensation levels, and job satisfaction. However, these platforms typically deploy sophisticated anti-scraping mechanisms that traditional crawling techniques struggle to handle.

This article presents an intelligent scraping solution built on modern Python tooling: Playwright for browser automation, combined with asynchronous programming and structured parsing, to efficiently collect company review data from platforms such as Glassdoor. The focus is not only on data acquisition, but also on structuring the collected data and implementing counter-anti-scraping strategies.

Technology Stack Overview

  • Playwright: browser automation tool developed by Microsoft, with headless browser support (a minimal end-to-end sketch follows this list)

  • Asyncio: Python's asynchronous I/O framework, used for highly concurrent data fetching

  • BeautifulSoup4: HTML parsing library for extracting structured data

  • Pandas: data processing and analysis library

  • Proxy IPs and request disguising: countering anti-scraping mechanisms

  • Data storage: dual-format storage in SQLite and CSV
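
Before diving into the full implementation, the following minimal sketch shows how these pieces fit together: Playwright renders the JavaScript-heavy page in headless Chromium, asyncio drives the coroutine, and BeautifulSoup extracts review cards from the rendered HTML. The URL and the div.gdReview selector are illustrative assumptions; the complete, configurable version follows in the next sections.

python

# minimal_sketch.py -- illustrative only; real Glassdoor pages change their markup
# and are protected by anti-bot measures, so treat this as a structural sketch.
import asyncio

from bs4 import BeautifulSoup
from playwright.async_api import async_playwright


async def fetch_review_cards(url: str) -> list[str]:
    """Render a page with headless Chromium and pull review-card text with BeautifulSoup."""
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url, timeout=30000)
        html = await page.content()  # HTML after JavaScript has executed
        await browser.close()

    soup = BeautifulSoup(html, "html.parser")
    return [card.get_text(" ", strip=True) for card in soup.select("div.gdReview")]


if __name__ == "__main__":
    cards = asyncio.run(fetch_review_cards("https://www.glassdoor.com/Reviews/Google-Reviews"))
    print(f"Found {len(cards)} review cards")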

Complete Scraper Implementation

1. Environment Setup and Dependency Installation

python

# requirements.txt
playwright==1.40.0
beautifulsoup4==4.12.2
pandas==2.1.4
aiofiles==23.2.1
aiohttp==3.9.1
nest-asyncio==1.5.8
fake-useragent==1.4.0
python-dotenv==1.0.0
sqlalchemy==2.0.23

# Install commands
# pip install -r requirements.txt
# playwright install chromium

2. Configuration and Constants

python

# config.py
import os
from dataclasses import dataclass
from typing import Optional

from fake_useragent import UserAgent
from dotenv import load_dotenv

load_dotenv()


@dataclass
class ScraperConfig:
    """Scraper configuration."""
    # Target platform settings
    GLASSDOOR_BASE_URL = "https://www.glassdoor.com"
    REVIEWS_PER_PAGE = 10

    # Proxy settings
    PROXY_SERVER: Optional[str] = os.getenv("PROXY_SERVER")
    PROXY_USERNAME: Optional[str] = os.getenv("PROXY_USERNAME")
    PROXY_PASSWORD: Optional[str] = os.getenv("PROXY_PASSWORD")

    # Request settings
    REQUEST_TIMEOUT = 30000  # milliseconds
    MAX_RETRIES = 3
    DELAY_BETWEEN_REQUESTS = (1, 3)  # random delay range in seconds

    # Browser settings
    HEADLESS = True
    VIEWPORT = {"width": 1920, "height": 1080}
    USER_AGENT = UserAgent().random

    # Storage settings
    OUTPUT_DIR = "data/reviews"
    SQLITE_DB = "company_reviews.db"

    @classmethod
    def get_proxy_url(cls) -> Optional[str]:
        """Build the proxy URL from the configured credentials."""
        if cls.PROXY_SERVER:
            if cls.PROXY_USERNAME and cls.PROXY_PASSWORD:
                return f"http://{cls.PROXY_USERNAME}:{cls.PROXY_PASSWORD}@{cls.PROXY_SERVER}"
            return f"http://{cls.PROXY_SERVER}"
        return None


class Selectors:
    """CSS selector definitions."""
    # Glassdoor selectors (these change frequently; adjust them to the live DOM)
    REVIEW_CARD = "div.gdReview"
    REVIEW_RATING = "span.ratingNumber"
    REVIEW_TITLE = "a.reviewLink"
    REVIEW_DATE = "time.date"
    REVIEWER_ROLE = "span.authorJobTitle"
    REVIEWER_LOCATION = "span.authorLocation"
    REVIEW_PRO = "span.pros"
    REVIEW_CON = "span.cons"
    REVIEW_ADVICE = "span.adviceMgmt"
    OVERALL_RATING = "div.ratingNum"
    WORK_LIFE_BALANCE = "div.rating span"
    CULTURE_VALUES = "div.rating span"
    CAREER_OPPORTUNITY = "div.rating span"
    COMPANY_BENEFITS = "div.rating span"
    SENIOR_MANAGEMENT = "div.rating span"

    # Pagination selectors
    NEXT_PAGE = "button.nextButton"
    PAGINATION = "div.pagination"

    # Login-related selectors
    LOGIN_MODAL = "div.modal"
    EMAIL_INPUT = "input#userEmail"
    PASSWORD_INPUT = "input#userPassword"
    LOGIN_BUTTON = "button[type='submit']"
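
The proxy fields above are populated by python-dotenv from a .env file in the project root, for example lines such as PROXY_SERVER=proxy.example.com:8080, PROXY_USERNAME=username, and PROXY_PASSWORD=password (placeholder values); if no proxy is configured, get_proxy_url() simply returns None.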

3. Core Scraper Class Implementation

python

# glassdoor_scraper.py
import asyncio
import json
import random
import logging
from datetime import datetime
from typing import Dict, List, Optional, Any
from pathlib import Path

import pandas as pd
from bs4 import BeautifulSoup
from playwright.async_api import async_playwright, Page, BrowserContext
import aiofiles
import aiohttp
from sqlalchemy import create_engine, Column, Integer, String, Float, DateTime, Text
from sqlalchemy.orm import declarative_base, sessionmaker

from config import ScraperConfig, Selectors

# Logging configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('scraper.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# Database model
Base = declarative_base()


class CompanyReview(Base):
    """Company review data model."""
    __tablename__ = 'company_reviews'

    id = Column(Integer, primary_key=True)
    company_name = Column(String(200))
    review_id = Column(String(100), unique=True)
    reviewer_role = Column(String(200))
    reviewer_location = Column(String(200))
    review_date = Column(DateTime)
    overall_rating = Column(Float)
    work_life_balance = Column(Float)
    culture_values = Column(Float)
    career_opportunity = Column(Float)
    company_benefits = Column(Float)
    senior_management = Column(Float)
    review_title = Column(String(500))
    pros = Column(Text)
    cons = Column(Text)
    advice_to_management = Column(Text)
    employment_status = Column(String(100))
    source_platform = Column(String(50))
    scraped_at = Column(DateTime, default=datetime.utcnow)
    url = Column(String(500))


class GlassdoorScraper:
    """Glassdoor company review scraper."""

    def __init__(self, config: ScraperConfig = None):
        self.config = config or ScraperConfig()
        self.session = None
        self._prepare_directories()
        self._init_database()

    def _prepare_directories(self):
        """Create the output directory."""
        Path(self.config.OUTPUT_DIR).mkdir(parents=True, exist_ok=True)

    def _init_database(self):
        """Initialize the database."""
        self.engine = create_engine(f'sqlite:///{self.config.SQLITE_DB}')
        Base.metadata.create_all(self.engine)
        Session = sessionmaker(bind=self.engine)
        self.session = Session()

    async def random_delay(self):
        """Sleep for a random interval to avoid detection."""
        delay = random.uniform(*self.config.DELAY_BETWEEN_REQUESTS)
        await asyncio.sleep(delay)

    async def create_browser_context(self, playwright) -> BrowserContext:
        """Create a browser context."""
        browser_args = [
            '--disable-blink-features=AutomationControlled',
            '--no-sandbox',
            '--disable-setuid-sandbox',
            '--disable-dev-shm-usage',
            f'--user-agent={self.config.USER_AGENT}',
        ]
        if self.config.get_proxy_url():
            browser_args.append(f'--proxy-server={self.config.get_proxy_url()}')

        browser = await playwright.chromium.launch(
            headless=self.config.HEADLESS,
            args=browser_args
        )
        context = await browser.new_context(
            viewport=self.config.VIEWPORT,
            user_agent=self.config.USER_AGENT,
            ignore_https_errors=True
        )

        # Add a stealth init script to reduce automation fingerprints
        await context.add_init_script("""
            Object.defineProperty(navigator, 'webdriver', { get: () => undefined });
            window.chrome = { runtime: {} };
        """)
        return context

    async def handle_cookies_and_popups(self, page: Page):
        """Dismiss cookie banners and other pop-ups."""
        try:
            # Wait for and close any pop-ups that may appear
            selectors_to_check = [
                'button[aria-label="Close"]',
                'button:has-text("Accept")',
                'button:has-text("同意")',
                'button:has-text("拒绝")',
                'div.modal button.close'
            ]
            for selector in selectors_to_check:
                try:
                    await page.wait_for_selector(selector, timeout=5000)
                    await page.click(selector)
                    logger.info(f"Closed pop-up: {selector}")
                    await self.random_delay()
                except Exception:
                    continue
        except Exception as e:
            logger.debug(f"Error while handling pop-ups: {str(e)}")

    async def extract_review_data(self, page: Page, review_element) -> Optional[Dict]:
        """Extract the data of a single review."""
        try:
            # Review ID
            review_id = await review_element.get_attribute('data-id') or \
                await review_element.get_attribute('id')

            # Overall rating
            overall_rating_elem = await review_element.query_selector(Selectors.OVERALL_RATING)
            overall_rating = await self._extract_rating(overall_rating_elem)

            # Sub-ratings
            sub_ratings = await self._extract_sub_ratings(review_element)

            # Review title
            title_elem = await review_element.query_selector(Selectors.REVIEW_TITLE)
            review_title = await title_elem.inner_text() if title_elem else ""

            # Reviewer information
            role_elem = await review_element.query_selector(Selectors.REVIEWER_ROLE)
            reviewer_role = await role_elem.inner_text() if role_elem else ""

            location_elem = await review_element.query_selector(Selectors.REVIEWER_LOCATION)
            reviewer_location = await location_elem.inner_text() if location_elem else ""

            # Review date
            date_elem = await review_element.query_selector(Selectors.REVIEW_DATE)
            review_date_str = await date_elem.get_attribute('datetime') if date_elem else ""
            review_date = (
                datetime.fromisoformat(review_date_str.replace('Z', '+00:00'))
                if review_date_str else None
            )

            # Review body
            pros_elem = await review_element.query_selector(Selectors.REVIEW_PRO)
            pros = await pros_elem.inner_text() if pros_elem else ""

            cons_elem = await review_element.query_selector(Selectors.REVIEW_CON)
            cons = await cons_elem.inner_text() if cons_elem else ""

            advice_elem = await review_element.query_selector(Selectors.REVIEW_ADVICE)
            advice_to_management = await advice_elem.inner_text() if advice_elem else ""

            # Build the data dictionary
            review_data = {
                'review_id': review_id,
                'reviewer_role': reviewer_role.strip(),
                'reviewer_location': reviewer_location.strip(),
                'review_date': review_date,
                'overall_rating': overall_rating,
                'work_life_balance': sub_ratings.get('work_life_balance', 0),
                'culture_values': sub_ratings.get('culture_values', 0),
                'career_opportunity': sub_ratings.get('career_opportunity', 0),
                'company_benefits': sub_ratings.get('company_benefits', 0),
                'senior_management': sub_ratings.get('senior_management', 0),
                'review_title': review_title.strip(),
                'pros': pros.strip(),
                'cons': cons.strip(),
                'advice_to_management': advice_to_management.strip(),
                'source_platform': 'Glassdoor',
                'url': page.url,
                'scraped_at': datetime.utcnow()
            }
            return review_data

        except Exception as e:
            logger.error(f"Error extracting review data: {str(e)}")
            return None

    async def _extract_rating(self, element) -> float:
        """Extract a numeric rating from an element."""
        if not element:
            return 0.0
        try:
            text = await element.inner_text()
            return float(text.strip())
        except Exception:
            return 0.0

    async def _extract_sub_ratings(self, review_element) -> Dict[str, float]:
        """Extract the sub-category ratings."""
        sub_ratings = {}
        rating_categories = [
            ('work_life_balance', Selectors.WORK_LIFE_BALANCE),
            ('culture_values', Selectors.CULTURE_VALUES),
            ('career_opportunity', Selectors.CAREER_OPPORTUNITY),
            ('company_benefits', Selectors.COMPANY_BENEFITS),
            ('senior_management', Selectors.SENIOR_MANAGEMENT)
        ]
        for category, selector in rating_categories:
            elem = await review_element.query_selector(selector)
            sub_ratings[category] = await self._extract_rating(elem)
        return sub_ratings

    async def scrape_company_reviews(self, company_name: str, max_pages: int = 10) -> List[Dict]:
        """Scrape the reviews of a single company."""
        reviews_data = []
        all_review_ids = set()

        async with async_playwright() as p:
            context = await self.create_browser_context(p)
            page = await context.new_page()
            try:
                # Build the reviews URL
                search_url = f"{self.config.GLASSDOOR_BASE_URL}/Reviews/{company_name.replace(' ', '-')}-Reviews"
                logger.info(f"Start scraping: {company_name}")
                await page.goto(search_url, timeout=self.config.REQUEST_TIMEOUT)

                await self.handle_cookies_and_popups(page)
                await self.random_delay()

                # Resolve the company name as displayed on the page
                company_name_elem = await page.query_selector('h1')
                actual_company_name = await company_name_elem.inner_text() if company_name_elem else company_name

                current_page = 1
                while current_page <= max_pages:
                    logger.info(f"Processing page {current_page}")

                    # Wait for the reviews to load
                    await page.wait_for_selector(Selectors.REVIEW_CARD, timeout=10000)

                    # Collect all review elements on the page
                    review_elements = await page.query_selector_all(Selectors.REVIEW_CARD)

                    for review_element in review_elements:
                        review_data = await self.extract_review_data(page, review_element)
                        if review_data and review_data['review_id'] not in all_review_ids:
                            review_data['company_name'] = actual_company_name
                            reviews_data.append(review_data)
                            all_review_ids.add(review_data['review_id'])

                            # Persist to the database
                            await self.save_to_database(review_data)

                    # Check whether there is a next page
                    next_button = await page.query_selector(Selectors.NEXT_PAGE)
                    if next_button and await next_button.is_enabled():
                        await next_button.click()
                        await self.random_delay()
                        current_page += 1
                    else:
                        logger.info("No more pages")
                        break

                # Save everything collected so far
                if reviews_data:
                    await self.save_to_csv(reviews_data, company_name)

                logger.info(f"Finished scraping {company_name}, collected {len(reviews_data)} reviews")

            except Exception as e:
                logger.error(f"Error during scraping: {str(e)}")
                # Save whatever has already been collected
                if reviews_data:
                    await self.save_to_csv(reviews_data, company_name)
            finally:
                await context.close()

        return reviews_data

    async def save_to_database(self, review_data: Dict):
        """Save a review to the database."""
        try:
            # Skip reviews that already exist
            existing = self.session.query(CompanyReview).filter_by(
                review_id=review_data['review_id']
            ).first()
            if not existing:
                review_obj = CompanyReview(**review_data)
                self.session.add(review_obj)
                self.session.commit()
                logger.debug(f"Saved review to database: {review_data['review_id']}")
        except Exception as e:
            logger.error(f"Error saving to database: {str(e)}")
            self.session.rollback()

    async def save_to_csv(self, reviews_data: List[Dict], company_name: str):
        """Save reviews to a CSV file."""
        try:
            if not reviews_data:
                return

            # Sanitize the file name
            safe_company_name = "".join(
                c for c in company_name if c.isalnum() or c in (' ', '-', '_')
            ).rstrip()
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename = f"{self.config.OUTPUT_DIR}/{safe_company_name}_{timestamp}.csv"

            df = pd.DataFrame(reviews_data)
            df.to_csv(filename, index=False, encoding='utf-8-sig')
            logger.info(f"Data saved to CSV: {filename}")

            # Also save a JSON copy
            json_filename = filename.replace('.csv', '.json')
            async with aiofiles.open(json_filename, 'w', encoding='utf-8') as f:
                await f.write(json.dumps(reviews_data, ensure_ascii=False, indent=2, default=str))

        except Exception as e:
            logger.error(f"Error saving to CSV: {str(e)}")

    async def scrape_multiple_companies(self, company_names: List[str],
                                        max_pages_per_company: int = 5):
        """Scrape several companies concurrently."""
        # NOTE: all tasks share one SQLAlchemy session; keep concurrency modest
        tasks = []
        for company in company_names:
            task = asyncio.create_task(
                self.scrape_company_reviews(company, max_pages_per_company)
            )
            tasks.append(task)
        results = await asyncio.gather(*tasks, return_exceptions=True)
        return results

    async def export_data_analysis(self, company_name: str):
        """Export an analysis report for a company."""
        try:
            # Query the stored reviews
            reviews = self.session.query(CompanyReview).filter_by(
                company_name=company_name
            ).all()
            if not reviews:
                logger.warning(f"No data found for {company_name}")
                return

            # Convert to a DataFrame
            data = [{
                'review_date': r.review_date,
                'overall_rating': r.overall_rating,
                'work_life_balance': r.work_life_balance,
                'culture_values': r.culture_values,
                'career_opportunity': r.career_opportunity,
                'company_benefits': r.company_benefits,
                'senior_management': r.senior_management,
                'review_title': r.review_title,
                'pros': r.pros,
                'cons': r.cons
            } for r in reviews]
            df = pd.DataFrame(data)

            # Build the analysis report
            analysis_report = {
                'company_name': company_name,
                'total_reviews': len(df),
                'average_overall_rating': df['overall_rating'].mean(),
                'rating_distribution': df['overall_rating'].value_counts().to_dict(),
                'average_sub_ratings': {
                    'work_life_balance': df['work_life_balance'].mean(),
                    'culture_values': df['culture_values'].mean(),
                    'career_opportunity': df['career_opportunity'].mean(),
                    'company_benefits': df['company_benefits'].mean(),
                    'senior_management': df['senior_management'].mean()
                },
                # Convert numpy keys so the report stays JSON-serializable
                'reviews_per_year': (
                    {int(year): int(count)
                     for year, count in df['review_date'].dt.year.value_counts().items()}
                    if not df.empty else {}
                ),
                'most_common_pros': self._extract_keywords(df['pros'].str.cat(sep=' ')),
                'most_common_cons': self._extract_keywords(df['cons'].str.cat(sep=' '))
            }

            # Save the report (default=str handles remaining numpy numeric types)
            report_filename = (
                f"{self.config.OUTPUT_DIR}/{company_name}_analysis_"
                f"{datetime.now().strftime('%Y%m%d')}.json"
            )
            async with aiofiles.open(report_filename, 'w', encoding='utf-8') as f:
                await f.write(json.dumps(analysis_report, ensure_ascii=False, indent=2, default=str))

            logger.info(f"Analysis report saved: {report_filename}")
            return analysis_report

        except Exception as e:
            logger.error(f"Error generating analysis report: {str(e)}")

    def _extract_keywords(self, text: str, top_n: int = 10) -> List[str]:
        """Extract keywords (simplified word-frequency version)."""
        from collections import Counter
        import re

        words = re.findall(r'\b\w+\b', text.lower())
        stopwords = {'the', 'and', 'for', 'with', 'this', 'that', 'are',
                     'was', 'were', 'has', 'have', 'had'}
        filtered_words = [word for word in words if word not in stopwords and len(word) > 2]
        return [word for word, count in Counter(filtered_words).most_common(top_n)]

4. Main Program and Usage Example

python

# main.py
import asyncio
import sys
from typing import List

import nest_asyncio

from config import ScraperConfig
from glassdoor_scraper import GlassdoorScraper

# Allow nested event loops (useful in notebooks / embedded loops)
nest_asyncio.apply()


async def main():
    """Main entry point."""
    # Configure the scraper
    config = ScraperConfig()
    config.HEADLESS = False  # set to False while debugging
    config.MAX_RETRIES = 5

    # Create the scraper instance
    scraper = GlassdoorScraper(config)

    # Target companies
    companies = [
        "Google",
        "Microsoft",
        "Amazon",
        # Add more companies...
    ]

    try:
        # Batch scraping
        print("Start scraping company review data...")
        results = await scraper.scrape_multiple_companies(
            companies,
            max_pages_per_company=3  # scrape 3 pages per company
        )

        # Generate analysis reports
        for company in companies:
            await scraper.export_data_analysis(company)

        print("Scraping finished! Data saved under data/reviews/")

    except KeyboardInterrupt:
        print("\nScraping interrupted by user")
    except Exception as e:
        print(f"Error during scraping: {str(e)}")
        import traceback
        traceback.print_exc()
    finally:
        if scraper.session:
            scraper.session.close()


async def main_with_args(company_list: List[str]):
    """Run with companies passed on the command line."""
    config = ScraperConfig()
    scraper = GlassdoorScraper(config)
    try:
        await scraper.scrape_multiple_companies(company_list, max_pages_per_company=3)
        for company in company_list:
            await scraper.export_data_analysis(company)
    finally:
        if scraper.session:
            scraper.session.close()


def run_scraper_sync():
    """Run the scraper synchronously."""
    asyncio.run(main())


if __name__ == "__main__":
    # Optional command-line arguments: a list of company names
    if len(sys.argv) > 1:
        companies = sys.argv[1:]
        asyncio.run(main_with_args(companies))
    else:
        run_scraper_sync()

5. Advanced Feature Extensions

python

# advanced_features.py
import asyncio
import logging

import pandas as pd

from glassdoor_scraper import GlassdoorScraper

logger = logging.getLogger(__name__)


class AdvancedReviewScraper(GlassdoorScraper):
    """Extended scraper with support for more platforms."""

    async def scrape_indeed_reviews(self, company_name: str):
        """Scrape Indeed reviews."""
        # Indeed scraper implementation
        pass

    async def scrape_ambitionbox_reviews(self, company_name: str):
        """Scrape AmbitionBox reviews."""
        # AmbitionBox scraper implementation
        pass

    async def analyze_sentiment(self, text: str):
        """Sentiment analysis."""
        # A transformers-based sentiment model could be integrated here
        pass

    async def scrape_with_retry(self, url: str, max_retries: int = 3):
        """Scrape with a retry mechanism."""
        # NOTE: scrape_page is a placeholder for a page-level scraping method
        for attempt in range(max_retries):
            try:
                return await self.scrape_page(url)
            except Exception as e:
                logger.warning(f"Attempt {attempt + 1} failed: {str(e)}")
                if attempt < max_retries - 1:
                    await asyncio.sleep(2 ** attempt)  # exponential backoff
        raise Exception(f"Scraping failed: {url}")


class ReviewDataAnalyzer:
    """Data analyzer."""

    @staticmethod
    def generate_visualizations(df: pd.DataFrame, company_name: str):
        """Generate visualization charts."""
        import matplotlib.pyplot as plt
        import seaborn as sns

        # Rating trend over time
        plt.figure(figsize=(12, 6))
        df['review_date'] = pd.to_datetime(df['review_date'])
        monthly_avg = df.resample('M', on='review_date')['overall_rating'].mean()
        monthly_avg.plot(title=f'{company_name} - Monthly Average Rating')
        plt.savefig(f'data/reviews/{company_name}_rating_trend.png')

        # Rating distribution
        plt.figure(figsize=(10, 6))
        sns.histplot(df['overall_rating'], bins=10, kde=True)
        plt.title(f'{company_name} - Rating Distribution')
        plt.savefig(f'data/reviews/{company_name}_rating_distribution.png')

6. Docker Deployment Configuration

dockerfile

# Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install minimal system utilities
RUN apt-get update && apt-get install -y \
        wget \
        gnupg \
        unzip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Copy the dependency list and install Python packages
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install the Playwright-managed Chromium plus its system dependencies
# (a separate Google Chrome installation is not needed)
RUN playwright install --with-deps chromium

# Copy the project files
COPY . .

# Run the scraper
CMD ["python", "main.py"]
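
The image can then be built and run with the standard Docker commands, for example docker build -t glassdoor-scraper . followed by docker run --rm -v "$(pwd)/data:/app/data" glassdoor-scraper, so that the CSV/JSON output under data/ persists outside the container (the image tag and mount path are illustrative choices, not part of the original setup).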

7. Usage Notes and Best Practices

python

# usage_examples.py
"""
Usage examples for the Glassdoor scraper.
"""
import asyncio
import os

from config import ScraperConfig
from glassdoor_scraper import GlassdoorScraper


# Example 1: basic usage
async def example_basic():
    scraper = GlassdoorScraper()
    reviews = await scraper.scrape_company_reviews("Google", max_pages=2)
    print(f"Collected {len(reviews)} reviews")


# Example 2: batch scraping
async def example_batch():
    companies = ["Google", "Microsoft", "Apple", "Meta"]
    scraper = GlassdoorScraper()
    results = await scraper.scrape_multiple_companies(companies, max_pages_per_company=2)
    for company, reviews in zip(companies, results):
        if isinstance(reviews, list):
            print(f"{company}: {len(reviews)} reviews")


# Example 3: using a proxy
async def example_with_proxy():
    # The PROXY_* values are read from the environment when config.py is imported,
    # so in practice put them in the .env file (loaded by load_dotenv) or export
    # them before the program starts.
    os.environ['PROXY_SERVER'] = 'your-proxy-server:port'
    os.environ['PROXY_USERNAME'] = 'username'
    os.environ['PROXY_PASSWORD'] = 'password'

    config = ScraperConfig()
    scraper = GlassdoorScraper(config)
    await scraper.scrape_company_reviews("Amazon")


# Example 4: custom configuration
async def example_custom_config():
    config = ScraperConfig()
    config.HEADLESS = False                 # show the browser
    config.REQUEST_TIMEOUT = 60000          # 60-second timeout
    config.DELAY_BETWEEN_REQUESTS = (2, 5)  # longer delays

    scraper = GlassdoorScraper(config)
    await scraper.scrape_company_reviews("Netflix")


# Run any example with, e.g.: asyncio.run(example_basic())

Anti-Scraping Strategies and Ethical Considerations

Technical Countermeasures

  1. Request rate control: random delays and controlled intervals between requests

  2. IP rotation: using a pool of proxy IPs

  3. Browser fingerprint masking: dynamic User-Agent, Canvas fingerprint modification

  4. Behavior simulation: mimicking human actions such as mouse movement and scrolling (see the sketch after this list)

  5. CAPTCHA handling: integrating a third-party CAPTCHA recognition service
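
As an illustration of point 4, the sketch below simulates simple human-like mouse movement and incremental scrolling with Playwright's async API; it could be called, for instance, after handle_cookies_and_popups and before extracting reviews. The helper name and the movement pattern are assumptions for illustration, not part of the scraper above.

python

# human_behavior.py -- a minimal, hypothetical human-behavior simulation helper
import random

from playwright.async_api import Page


async def simulate_human_behavior(page: Page):
    """Move the mouse along a few random points and scroll the page in small steps."""
    width, height = 1920, 1080  # should match ScraperConfig.VIEWPORT

    # A few smooth, randomized mouse movements
    for _ in range(random.randint(2, 4)):
        x = random.uniform(0, width)
        y = random.uniform(0, height)
        await page.mouse.move(x, y, steps=random.randint(10, 25))
        await page.wait_for_timeout(random.randint(200, 800))

    # Scroll down in several small increments instead of one big jump
    for _ in range(random.randint(3, 6)):
        await page.mouse.wheel(0, random.randint(200, 600))
        await page.wait_for_timeout(random.randint(300, 1000))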

Ethical and Legal Considerations

  1. Respect robots.txt: check the target site's crawling policy (see the sketch after this list)

  2. Data usage limits: use the data for personal research and analysis only

  3. Copyright respect: do not redistribute the raw data

  4. Privacy protection: do not collect personally identifiable information

  5. Access rate limits: avoid putting undue load on the target site
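
For point 1, the standard library's urllib.robotparser can check a path against the target site's robots.txt before it is queued; a minimal sketch follows (the helper name and example path are illustrative, and whatever the parser reports should be respected).

python

# robots_check.py -- consult robots.txt before scraping a path
from urllib.robotparser import RobotFileParser


def is_allowed_by_robots(base_url: str, path: str, user_agent: str = "*") -> bool:
    """Return True if robots.txt permits user_agent to fetch base_url + path."""
    parser = RobotFileParser()
    parser.set_url(f"{base_url}/robots.txt")
    parser.read()
    return parser.can_fetch(user_agent, f"{base_url}{path}")


# Example (illustrative path):
# allowed = is_allowed_by_robots("https://www.glassdoor.com", "/Reviews/Google-Reviews")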

Performance Optimization Tips

  1. Async concurrency control: use a semaphore to cap the number of concurrent tasks (see the sketch after this list)

  2. Connection pooling: reuse HTTP connections

  3. Caching: cache data to reduce duplicate requests

  4. Incremental scraping: fetch only new data

  5. Distributed deployment: build a distributed scraper with a Redis queue
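
For point 1, the sketch below shows how asyncio.Semaphore could cap how many companies are scraped in parallel; the scrape_with_limit helper and the limit of 2 are illustrative assumptions layered on top of the GlassdoorScraper class above.

python

# concurrency_limit.py -- semaphore-bounded fan-out over companies
import asyncio
from typing import List

from glassdoor_scraper import GlassdoorScraper


async def scrape_with_limit(scraper: GlassdoorScraper, companies: List[str],
                            max_concurrent: int = 2, max_pages: int = 3):
    """Run at most max_concurrent company scrapes at the same time."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async def bounded_scrape(company: str):
        async with semaphore:  # wait here once the limit is reached
            return await scraper.scrape_company_reviews(company, max_pages)

    tasks = [asyncio.create_task(bounded_scrape(c)) for c in companies]
    return await asyncio.gather(*tasks, return_exceptions=True)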

Data Application Scenarios

  1. Company competitiveness analysis: compare employee satisfaction across companies

  2. Investment decision support: assess the health of a company's culture

  3. Recruiting reference: understand the real working environment at a target company

  4. Market research: analyze industry trends and employee concerns

  5. Academic research: studies in organizational behavior and human resource management

Conclusion

This article implemented a fully featured Glassdoor company review scraping system. Using Playwright's browser automation capabilities together with asynchronous programming and structured parsing, it can cope effectively with the anti-scraping measures of modern websites. Beyond data collection, the system also provides a complete pipeline for data storage, analysis, and visualization.
