A recent open-source project named "Colleague.skill" has gained significant popularity on the technical community platform GitHub. Its function involves feeding an AI with "raw materials" such as a departing colleague's Feishu messages, DingTalk documents, emails, and screenshots. By adding a few subjective descriptions of their personality, the system can generate an AI avatar capable of performing their work tasks, a process users have jokingly referred to as "digital distillation."
The idea has sparked widespread discussion on social media: a laid-off employee leaves behind not only the skills and operational logic acquired during their tenure but also their personal language style and interpersonal habits, effectively continuing to work for the company in perpetuity through an AI.
This raises critical questions: Where does the data for training these AI avatars come from, and to what extent can it be used? How can intellectual property rights be strengthened and ethical boundaries of technology clarified to ensure that the benefits of AI development reach a broader range of workers?
The concept of "cyber immortality" for former employees has recently become a hot topic. This technology can extract tacit knowledge such as a person's work experience, communication style, and decision-making logic, enabling the AI avatar to possess a degree of human-like thinking ability and complete specific job functions.
Many interviewees express concern that "distilling" a colleague not only infringes upon workers' intellectual property rights but also means that once the relevant data has been collected, the AI may cease to be a reliable assistant and instead become a tool to "optimize out" or replace the original worker. In an attempt to sabotage their own digital replicas, some users have even tried "poisoning" the data: by deliberately writing poor code, they reason, the AI will learn only flawed patterns and produce outputs useless to the company.
Conversely, other users have developed "Anti-Colleague.skill" tools. Rather than cloning others, these tools aim to protect one's own core knowledge by replacing genuinely important information with superficially correct but substantively empty statements, making that knowledge difficult to replicate accurately.
Experts point out that while "Colleague.skill" appears to be a tool for rapidly boosting productivity, it harbors multiple hidden risks.
Firstly, there are issues concerning the permissible use of worker privacy and intellectual property. Professor Sun Zaifu from Qingdao Huanghai University notes that it remains a legally grey area whether the experience and habits an individual accumulates in the workplace belong more to the company as assets or to the worker as personal intellectual property. Cloning an employee's personal experiences, external learning outcomes, non-job-related inventions, or individual methodologies could constitute an infringement of their intellectual achievements. Furthermore, if AI-generated content leads to infringement, errors, or data leaks, existing laws struggle to clearly assign liability, potentially creating a legal vacuum where accountability is diffuse.
Secondly, the iterative development of such skills fuels anxiety about job displacement. Associate Professor Liu Xi from Shandong Normal University warns that if this technology develops chaotically, it could lead directly to significant reductions in human resource needs for certain roles, triggering a chain of issues including reassignments, pay cuts, and layoffs.
Thirdly, the pace of talent development may struggle to keep up with technological advances. As AI continues to evolve, skill renewal cycles could shorten from a decade to just two or three years, making lifelong reliance on a single skill set obsolete. Concurrently, vocational education models may shift from long-term degree programs to shorter, modular, micro-credential-based learning, requiring individuals to transition from one-time education to lifelong learning, continuous updating, and dynamic skill adaptation.
Many interviewees express confusion about how ordinary workers can share in the benefits of the AI era. While it is undeniable that AI development will replace some basic jobs, it will also create new roles such as AI trainers, prompt engineers, digital workforce managers, and algorithm compliance auditors. However, the number of these new positions is currently limited and may not fully absorb the displaced workforce; resolving the "race" between the pace of technological advancement and the emergence of new jobs and social safety nets will take a prolonged period of adjustment.
On one hand, cultivating competencies that AI cannot replace and learning to leverage AI for problem-solving are crucial for ensuring that AI acts as an enhancement to human capabilities. Liu Xi emphasizes that as AI gradually takes over the "how" of execution, the decision-making value of "what" to do and "why" becomes more prominent. Universities should promptly adjust their programs and curricula, strengthening education in humanities, social sciences, and ethics to enhance students' abilities in problem definition, value judgment, and comprehensive aesthetic appreciation.
Sun Zaifu suggests that companies can adopt a "human-machine coupling" approach, assigning standardized, repetitive tasks to AI Skills and freeing human employees from mundane work so they can focus on high-value strategic decision-making, emotional communication, and complex problem-solving.
As the "algorithmization of capability" becomes an unavoidable trend, workers might consider encapsulating their professional skills, work experience, and decision-making logic into reusable AI Skills. By offering these via subscription, licensing, or usage-based revenue sharing models, individuals could provide remote services to multiple enterprises, enabling flexible employment patterns where one person serves several companies and maximizes their labor value.
In the era of human-machine collaboration, how can the primacy and irreplaceability of humans be safeguarded? Experts argue that current efforts should focus on legal regulations, ethical reviews, and industry self-discipline to protect workers' legitimate rights and interests, laying a solid foundation for synergistic development.
Specific recommendations include closing legal gaps to establish firm constraints. Sun Zaifu advises clarifying under the framework of personal information protection laws that employees' work behavior patterns, communication styles, judgment preferences, and thinking logic constitute personal information, potentially even sensitive personal information. Companies must obtain separate, written consent for using such data in AI training. Upon termination of employment, companies should be required to delete personal trace data used for AI training within a specified period.
Strengthening ethical reviews is also crucial to reduce the cost of rights protection for workers. Sun Zaifu recommends that systems like "Colleague.skill" intended for enterprise use should undergo filing and ethical assessment with cyberspace, human resources, and market regulation authorities before deployment, focusing on the legality of data authorization, the necessity of collection scope, and the protection of labor rights.
Finally, improving industry self-discipline can provide a complementary soft constraint. Liu Xi suggests that industry associations could take the lead, collaborating with legal institutions and labor protection organizations to develop self-regulatory conventions. These would promote industry self-discipline, discourage algorithmic competition that devalues human professions or exacerbates employment imbalances, and ensure that technological progress remains aligned with civilized and lawful principles.