China Legal Science, 2025 Issue No. 1 | Identification of Economic Effects and Legal Regulation of Algorithmic Personalized Pricing

Academic | 2025-01-16 11:51 | Beijing

IDENTIFICATION OF ECONOMIC EFFECTS AND LEGAL REGULATION OF ALGORITHMIC PERSONALIZED PRICING IN CHINA

Xie Yizhang


TABLE OF CONTENTS


I. INTRODUCTION 

II. SCENARIO-BASED ANALYSIS OF ALGORITHMIC PERSONALIZED PRICING 

A. Algorithmic Personalized Pricing Has Limited Detrimental Effects on Consumer Welfare

B. Algorithmic Personalized Pricing Has Promoting Effects on Operator Welfare and Social Welfare

III. REFINING LEGAL APPROACHES TO ALGORITHMIC PERSONALIZED PRICING 

A. The Effect of Antitrust Law on the Regulation of Algorithmic Personalized Pricing Is Limited

B. The Key to Regulating Algorithmic Personalized Pricing Is to Ensure Compliance in Its Implementation Process

IV. REGULATORY SUGGESTIONS FOR ALGORITHMIC PERSONALIZED PRICING

A. Introduction of the Transparency Principle to Address Information Asymmetry Between Operators and Consumers

B. Activation of the Algorithmic Impact Assessment Mechanism to Ensure Compliance

C. Implementation of the Opt-In and Opt-Out Models to Safeguard Consumer Rights

V. CONCLUSION 


Algorithmic personalized pricing has entered the Chinese public sphere under the name of ‘big data-enabled price discrimination’. In terms of economic effects, algorithmic personalized pricing can not only promote competition but also improve social and consumer welfare in many cases. The harm and illegality of algorithmic personalized pricing do not lie in the practice itself, but in the damage to consumers’ legitimate rights caused by the process and manner of its use. The regulatory concept of algorithmic personalized pricing in China should therefore shift from the substantive legitimacy of its results to the legitimacy of its process. The substantive justice of the economic effects of algorithmic personalized pricing is safeguarded by supervising the procedural justice of its implementation. In the compliance process of algorithmic personalized pricing, transparency, the legitimacy evaluation of algorithmic procedures, and the protection of consumer rights play key roles in procedural justice.


I. INTRODUCTION


In the digital economy era, consumers’ personal data is a core competitive resource in the digital market. As one of the most important technological drivers of the digital economy, algorithms have become accelerators of commercial innovation. As a combination of platforms, algorithms, and data, algorithmic personalized pricing appears ever more frequently in the business strategies of platform operators and has become an inevitable trend in the future development of the digital economy. Compliance supervision of algorithmic personalized pricing has therefore become a popular topic worldwide.


In 2018, algorithmic personalized pricing was first reported in China as ‘using big-data analysis to swindle existing customers’ and quickly triggered widespread concern. Most media and consumers believe that operators’ implementation of algorithmic personalized pricing results in unfairness. This kind of evaluative inference appears to hinder the objective examination of algorithmic personalized pricing. Some scholars argue that algorithmic personalized pricing leads to price fraud, resulting in a crisis of trust between consumers and platform operators. More scholars focus on the damage that algorithmic personalized pricing does to market competition and social welfare, arguing that its exploitation of consumers is latent yet certain, and that operators with dominant market positions will use algorithmic personalized pricing to undermine the competition mechanism. In recent years, China’s regulatory authorities have successively issued policies and regulations to govern operators’ implementation of algorithmic personalized pricing and to urge enterprises to rectify their business strategies.


However, the current mainstream view in legal research on algorithmic personalized pricing is inconsistent with that of economic research. First, from the perspective of China’s legal framework, operators have the right to determine the pricing of goods or services and their marketing strategies. Legal intervention in the implementation of an algorithmic personalized pricing strategy must therefore be based on specific damages, such as a reduction in social welfare or harm to consumer rights; evaluative inferences alone are not sufficient to justify legal intervention. Second, from an economic perspective, algorithmic personalized pricing is not a new phenomenon. It is a form of price discrimination and has complex economic effects. Sometimes algorithmic personalized pricing may benefit both buyers and sellers, or harm both. In terms of overall effect, economic research holds that algorithmic personalized pricing is conducive to market competition and the improvement of total social welfare.


There is a gap between existing legal and economic research views. The legal literature mainly focuses on exploring the moral hazards of algorithmic personalized pricing in order to respond to consumers’ real demand to stop ‘using big-data analysis to swindle existing customers’. In China’s current legal research, there is little objective reasoning about, or attention to, the economic effects of algorithmic personalized pricing, while economic analysis focuses more on its specific damage scenarios, methods, and effects. In view of these differences in understanding between legal and economic research, this study analyzes the objective effects of algorithmic personalized pricing on social welfare, consumer welfare, and market competition; identifies the key issues in the process by which operators implement algorithmic personalized pricing; and then charts a path toward a more workable and sensible set of rules for algorithmic personalized pricing.


II. SCENARIO-BASED ANALYSIS OF ALGORITHMIC PERSONALIZED PRICING

Algorithmic personalized pricing is a business practice in which commercial platforms use consumer information about personal characteristics and behavioral traits, observed, inferred, or collected through specific algorithmic technology, to set different prices for the same goods to different consumers, so that the price charged to each consumer reflects his or her willingness to pay. In economics, algorithmic personalized pricing is neutral and has both positive and negative impacts on society as a whole. The philosophy behind operators’ implementation of algorithmic personalized pricing strategies is to set prices based on consumers’ willingness to pay rather than on cost. Although this pricing concept differs from the cost-based pricing generally accepted by society today, and its impacts on social welfare, consumer welfare, and other economic effects differ accordingly, this does not mean that a personalized pricing strategy based on willingness to pay is necessarily negative. The economic nature and varied impacts of algorithmic personalized pricing are fundamental issues that need to be clarified. Specifically, if an operator’s implementation of algorithmic personalized pricing brings positive benefits to consumers, operators, and market competition, it has its own legitimacy; it should be prohibited only when its implementation harms consumer welfare, operator welfare, and social welfare.


A. Algorithmic Personalized Pricing Has Limited Detrimental Effects on Consumer Welfare

Compared to a single pricing strategy, algorithmic personalized pricing divides consumers into two pricing scenarios: pricing above the prevailing single-market price for high-value consumers, and pricing below it for low-value consumers. Algorithmic personalized pricing can thus have two effects on consumer welfare. First, it favors low-value consumers by increasing their purchasing choices for the product. In general, a single-pricing strategy excludes this group because they cannot afford the corresponding goods, whereas algorithmic personalized pricing reduces the price in line with their willingness to pay, bringing low-value consumers back into the target market for the item. Second, it disadvantages high-value consumers: when they purchase a product under a personalized pricing strategy, their consumer surplus decreases, and the decrease is transferred to the operator surplus. The effect of algorithmic personalized pricing on overall consumer welfare therefore lies in the trade-off between the loss of benefits for high-value consumers and the additional purchasing options for low-value consumers. Accordingly, whether legal intervention is necessary and legitimate depends precisely on the sum of the interests of the two consumer groups.
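The surplus trade-off described above can be made concrete with a small numeric sketch. All figures below (the willingness-to-pay values, the uniform price, and the unit cost) are hypothetical assumptions chosen purely for illustration, not data from this article.

```python
# A minimal sketch of the consumer-surplus trade-off between high-value and
# low-value consumers. All numbers are hypothetical.

COST = 4            # unit cost of the good (assumed)
UNIFORM_PRICE = 10  # prevailing single-market price (assumed)

# Hypothetical willingness-to-pay (WTP) values: above 10 = "high-value"
# consumers, below 10 = "low-value" consumers.
wtp = [14, 12, 11, 8, 6, 5]

def units_sold(prices):
    """Number of consumers who buy: those whose WTP covers the price charged."""
    return sum(1 for w, p in zip(wtp, prices) if w >= p)

def consumer_surplus(prices):
    """Sum of (WTP - price) over consumers who actually buy."""
    return sum(w - p for w, p in zip(wtp, prices) if w >= p)

# Uniform pricing: every consumer faces the same price; low-value consumers
# are priced out of the market entirely.
uniform = [UNIFORM_PRICE] * len(wtp)
print(units_sold(uniform), consumer_surplus(uniform))          # 3 7

# Personalized pricing: each consumer is charged exactly his or her WTP.
# All six consumers now buy, but the entire surplus shifts to the operator.
personalized = list(wtp)
print(units_sold(personalized), consumer_surplus(personalized))  # 6 0
```

The sketch shows both effects at once: low-value consumers gain access to the good (units sold rise from 3 to 6), while high-value consumers lose their surplus (7 falls to 0), which is transferred to the operator.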

For low-value consumers, the operator’s implementation of algorithmic personalized pricing has a deterministic effect on additional purchasing options. Under a single pricing strategy, low-value consumers are excluded from the market for the goods or services, whereas algorithmic personalized pricing sets prices based on consumer preferences, bringing a group that would otherwise be excluded back into the range of potential purchasers. The algorithmic personalized pricing strategy increases this group’s choices and the likelihood that potential transactions in the market are realized, thereby improving the efficiency of social resource allocation and total social welfare. In short, for low-value consumers, the implementation of algorithmic personalized pricing does not harm their rights and interests in any way; on the contrary, it increases their welfare of choice.


As a result, the crux of implementing algorithmic personalized pricing strategies lies in the issue of consumer welfare for high-value consumer groups. Specifically, whether the implementation of an algorithmic personalized pricing strategy damages the interests of such consumer groups and its degree of damage become the key to judging the welfare impairment of high-value consumers.


From an objective perspective, only when members of the high-value consumer group buy a product priced above the normal single-market price does their consumer surplus decrease, with the reduction transferred to an increase in operator surplus. It thus appears that algorithmic personalized pricing decreases the welfare of this group. Yet, as Mises, a representative of the Austrian School of Economics, observed, although economics may aspire to precise measurement of the objective material world, the ‘real price’ formed in market transactions is more complex. Economic efficiency measures outcomes, but the variables involved in achieving those outcomes come from reasoning about human behavior in practice. The subjective initiative and objective rationality of such consumers cannot be ignored when discussing the application of algorithmic personalized pricing to high-value consumers at prices above the single-market level.


As far as consumer initiative is concerned, although consumers tend to have inferior access to information compared to commercial platforms, the popularity of Internet usage has made it easier for consumers to obtain comprehensive information about the prices of goods, which to a certain extent hinders operators’ use of algorithmic personalized pricing strategies to capture the consumer surplus. In reality, consumers do not respond passively to price changes. In the current Internet environment, consumers have a variety of tools to help them find better prices, such as search engines and price-comparison websites, and can compare prices for the same product and find the lowest one with a click of the mouse. Operators implementing algorithmic personalized pricing cannot fully capture the surplus of such consumer groups once consumers become aware of the strategy. On the contrary, existing research shows that when consumers become aware of an operator’s personalized pricing strategy and find it unfair, they not only refuse to buy the operator’s products but also take measures to resist, or strongly condemn, the operator’s price discrimination. Theoretically, the implementation of personalized pricing reduces consumer surplus only if the operator holds a complete monopoly in a single market, because only then can consumers not switch to other operators. Even so, as the Internet and e-commerce develop, an operator with a complete monopoly cannot prevent the formation of a consumer arbitrage mechanism, that is, it cannot stop low-value consumers from reselling the product to high-value consumers.
Thus, with the development of algorithmic personalized pricing techniques and the awakening of consumer awareness, it will become increasingly difficult for operators to obtain profits from high-value consumers through personalized pricing.

Leibbrandt’s experiment suggests that operators spontaneously refrain from adopting algorithmic personalized pricing strategies when they know that consumers are aware of being charged different prices; conversely, operators are more motivated to engage in such strategies when they know that consumers are unaware that price discrimination is taking place. This behavioral evidence shows that the main problem with algorithmic personalized pricing in practice is that consumers find it difficult to identify that an operator has engaged in it, or that operators deliberately take steps to keep consumers from knowing. In business practice, if operators deliberately conceal the fact that they are implementing algorithmic personalized pricing, consumers often mistakenly believe that the price they face is uniform across the market. In that scenario, a mistaken understanding of the pricing strategy leads consumers to form wrong purchase intentions, which not only reduces consumer welfare but also damages consumers’ right to know. Furthermore, when consumers are unaware of the operator’s personalized pricing, their right to choose is compromised because it is exercised on the basis of misinformation. Therefore, when the operator is not transparent about, or intentionally conceals, its algorithmic personalized pricing strategy, the resulting damage to consumers’ interests creates the need for legal intervention; in that case the pricing is neither fair nor justified, as it jeopardizes consumers’ rights to be informed and to choose.


B. Algorithmic Personalized Pricing Has Promoting Effects on Operator Welfare and Social Welfare

From the perspective of economic analysis, as long as prices exceed cost, algorithmic personalized pricing has more advantages than a single pricing strategy in increasing total social welfare and operator welfare. This is because algorithmic personalized pricing is calculated on the basis of willingness to pay rather than cost, which facilitates the completion of more potential transactions and maximizes total social welfare.


First, when the maximum price under algorithmic personalized pricing exceeds the platform’s single price, total social welfare increases, operator surplus increases, and consumer surplus decreases. The group affected by the reduction in consumer surplus consists of consumers whose willingness to pay exceeds the single-price level. As mentioned above, for consumers whose willingness to pay is equal to or lower than the single price, the choice between the two pricing strategies makes no difference: although consumers with low reserve prices will purchase products at correspondingly low prices under the personalized strategy, the consumer surplus they obtain is essentially zero, just as under single pricing. In this case, therefore, the reduction in the surplus of high-value consumers is transferred to the operator surplus. As the maximum price charged to high-value consumers rises, the surplus of high-value consumers continues to decrease and the operator surplus continues to increase, while total social welfare remains unchanged, since the transfer between the two offsets exactly. Compared with single pricing, then, when operators choose personalized pricing, operator welfare and total social welfare both increase, the increase being the increment captured from prices above the single-market level.


Second, when the maximum price under algorithmic personalized pricing is lower than the single price but exceeds cost, algorithmic personalized pricing not only increases the consumer surplus of the high-reserve-price group but also increases the purchase choices of the low-reserve-price group. Because operators still sell above cost, even though the personalized prices are overall lower than the single price, operators’ benefits still increase. In this case, compared with the single pricing strategy, both total social welfare and consumer welfare increase under algorithmic personalized pricing.


Third, total social welfare may be reduced only when the minimum price under algorithmic personalized pricing falls below cost. When an operator sells a product below cost, operator welfare no longer rises but falls. In this scenario, although consumer welfare increases, the reduction in operator welfare may exceed the increase in consumer welfare, reducing total social welfare. However, given their profit-driven nature, operators generally do not sell goods below cost. Even where goods are sold below cost, this is only a temporary promotional tactic whose ultimate goal is to capture the market quickly through low prices and then earn higher profits through market power.
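The three scenarios can be illustrated with a hedged numeric sketch. Every number below (willingness to pay, prices, cost) is a hypothetical assumption chosen only to exhibit the welfare directions described above, not empirical data.

```python
# Total welfare = operator surplus + consumer surplus, summed over the
# consumers who actually buy (willingness to pay >= price charged).
# All figures are hypothetical.

def welfare(wtp, prices, cost):
    """Return (operator_surplus, consumer_surplus, total_welfare)."""
    op = sum(p - cost for w, p in zip(wtp, prices) if w >= p)
    cs = sum(w - p for w, p in zip(wtp, prices) if w >= p)
    return op, cs, op + cs

cost = 5
wtp = [14, 12, 8, 6]  # hypothetical willingness-to-pay values

# Baseline: uniform price of 10 -- only the two high-value consumers buy.
print(welfare(wtp, [10, 10, 10, 10], cost))          # (10, 6, 16)

# Scenario 1: personalized prices at each WTP (maximum price 14 > 10):
# operator surplus and total welfare rise; consumer surplus falls to zero.
print(welfare(wtp, wtp, cost))                       # (20, 0, 20)

# Scenario 2: personalized prices capped below the uniform price but above
# cost: consumers keep some surplus, and all potential trades still complete.
print(welfare(wtp, [min(w, 9) for w in wtp], cost))  # (12, 8, 20)

# Scenario 3: selling below cost to a consumer whose WTP (3) is itself below
# cost destroys welfare relative to simply not serving that consumer.
wtp3 = [14, 12, 8, 3]
print(welfare(wtp3, [14, 12, 8, 99], cost))          # (19, 0, 19)
print(welfare(wtp3, [14, 12, 8, 2], cost))           # (16, 1, 17)
```

Note that in scenario 3 the total falls (19 to 17) only because the below-cost sale goes to a consumer who values the good at less than it costs to supply; this matches the text’s observation that below-cost personalized pricing is the one case where total social welfare can shrink.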


As a result, compared to single-market pricing, operators’ implementation of algorithmic personalized pricing does not lead to a decrease in total social welfare, consumer welfare, or operator welfare; in most cases it has a positive effect. From a consequentialist perspective, platforms’ implementation of algorithmic personalized pricing strategies should not be morally perceived by consumers as ‘evil’; it in fact yields substantial economic benefits.


III. REFINING LEGAL APPROACHES TO ALGORITHMIC PERSONALIZED PRICING


Economic analysis provides jurisprudential analysis with an important data-based blueprint that can concretely show the substantial economic effects of algorithmic personalized pricing and dispel some of the misconceptions produced by subjective perceptions. Based on the above scenario-based analysis, the conclusion of legal research that algorithmic personalized pricing will damage consumer welfare and social welfare cannot be verified. Thus, we should rethink the regulatory path for algorithmic personalized pricing.


A. The Effect of Antitrust Law on the Regulation of Algorithmic Personalized Pricing Is Limited

With the rise of research on big data and competition law in China, many scholars tend to bring algorithmic personalized pricing within the regulation of antitrust law, arguing that algorithmic personalized pricing, as a new type of commercial practice in the digital economy, will inevitably affect consumer welfare, social welfare, and the order of competition. After the Anti-Monopoly Commission of the State Council officially published the Anti-Monopoly Guidelines in the Field of Platform Economy, the academic community seems to have further settled on regulating algorithmic personalized pricing through antitrust law as the mainstream approach. However, antitrust regulation of algorithmic personalized pricing is a type of ex post facto regulation, which requires substantial damage to economic effects as the premise of legal intervention. In other words, the validity of using antitrust law as the primary regulatory basis rests on substantial harm to outcomes. Although there is a dispute in antitrust research between the consumer welfare standard and the total social welfare standard, under the framework of antitrust analysis, consumer welfare, social welfare, and competitive effects in the market have always been the constituent elements in judging whether a platform operator has committed an antitrust violation.


The aforementioned scenario-based analysis shows that algorithmic personalized pricing positively contributes to consumer welfare and total social welfare in most cases, implying that although both marketing and social psychology studies have concluded that algorithmic personalized pricing induces a strong sense of unfairness among consumers, subjective perceptions cannot be equated with objective existence. Without objective proof of consumer welfare and social welfare damage, antitrust law does not have a legitimate reason to regulate operators’ implementation of an algorithmic personalized pricing strategy.


In terms of competitive effects in the market, on its face the competitive harm algorithmic personalized pricing may cause is attributed to the differential-treatment behavior of abusing a dominant market position prohibited by article 17(6) of the Anti-Monopoly Law of China. The application of article 17(6) requires that an operator with a dominant market position put other competitors in the same market at a competitive disadvantage through the implementation of algorithmic personalized pricing. However, empirical research in economics shows that, through the reconfiguration of information property rights, algorithmic personalized pricing can promote full competition among oligopolies and achieve a unity of fairness and efficiency. In fact, when the personalized price set by an operator with a dominant market position exceeds the single-market price, consumer demand shifts to competitors’ alternative products; as a result, the pricing behavior not only does not impede competition but actually weakens the operator’s market power. Given the nature of profit maximization, operators implementing algorithmic personalized pricing may focus on providing preferential treatment to the low-value consumer group to reach Pareto-optimal transactions. In this case, compared with the single pricing strategy, the low-value consumer group gains the ability to buy, overall consumer welfare greatly improves, and operator welfare and total social welfare also improve, which is a benign manifestation of market competition.


As for the situation in which algorithmic personalized pricing damages market competition, a dominant operator may implement personalized pricing in the form of below-total-cost pricing over a short period. In fact, this is predatory pricing under the guise of an algorithmic personalized pricing strategy, aimed at excluding competitors from the market and harming competition in the peer market. In the era of the platform economy, platform operators with dominant market positions can easily obtain detailed consumer data to help them set prices significantly below cost in a particular market or consumer segment through algorithmic personalized pricing and thereby seize the market quickly. Given the profit-seeking nature of business, it is obvious that sustained losses cannot last: after driving out competitors, operators recoup the cost of predation from consumers and earn monopoly profits by raising prices. In this case, algorithmic personalized pricing is merely a cloak for predatory pricing; the essence of the operator’s use of market dominance to exclude and restrict competition remains the anti-competitive effect of predatory pricing, not the emergence of the new business model of algorithmic personalized pricing. Moreover, an operator intent on predatory pricing does not need a complex algorithmic personalized pricing strategy as a cloak; it is quicker to attract consumers and capture the market by directly lowering prices. Implementing predatory pricing under the guise of an algorithmic personalized pricing strategy is therefore not a common strategy for operators.


Incorporating algorithmic personalized pricing into the analytical framework of antitrust behavior reveals that the impact on social welfare, overall consumer welfare and the competition mechanism is not the crux of the harm caused by algorithmic personalized pricing. This indicates that the effect of antitrust law on the regulation of algorithmic personalized pricing strategy is limited.


B. The Key to Regulating Algorithmic Personalized Pricing Is to Ensure Compliance in Its Implementation Process

Algorithmic personalized pricing can damage consumers’ rights and interests, but this damage stems not from any negative impact of the operator’s implementation of algorithmic personalized pricing on consumer welfare, total social welfare, or competitive effects, but from the possible damage to consumers’ rights and interests in the implementation process.


To implement algorithmic personalized pricing, an operator must complete at least three important steps. First, the operator collects data on consumers’ personal characteristics and behavior. Second, based on the collected data, the operator designs the computational logic of the algorithm to estimate each consumer’s willingness to pay. Third, based on the estimated willingness to pay, the operator chooses the best price for each consumer and decides how to implement personalized pricing. Under an idealistic technological presupposition, given the neutrality of algorithmic technology, algorithm-based personalized pricing should be rational, efficient, and trustworthy, and its economic effect should follow the preset route toward Pareto optimality. However, although algorithmic technology is neutral, the design and implementation of algorithmic personalized pricing can embed the operator’s own logic and subjective intentions. In practice, operators’ interests are not always aligned with those of society and consumers, and their business logic is not always consistent with the requirements of rationality, procedural legitimacy, and scientific accuracy. To maximize business interests, operators may deliberately ignore or exclude from the algorithm the risks that should, in the legal sense, be avoided, in order to achieve the profit goal of facilitating all potential transactions. At the same time, they may be unable to identify or fully avoid the legal risks of using algorithms for personalized pricing because of the limitations of current technology. As the implementation of algorithmic personalized pricing involves operators’ core business secrets, whether the technology is neutral and compliant in practice depends largely on the subjective initiative of the operators.
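The three implementation steps can be sketched schematically. Everything below, including the profile fields, the toy willingness-to-pay model, and the pricing rule, is a hypothetical illustration, not a description of any real operator’s system.

```python
# A schematic, hypothetical sketch of the three steps: collect consumer data,
# estimate willingness to pay (WTP), and choose a personalized price.

from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    # Step 1: data collected on personal characteristics and behavior
    # (field names are illustrative assumptions).
    past_purchases: int
    avg_order_value: float
    browsed_premium_items: bool

def estimate_wtp(profile: ConsumerProfile) -> float:
    """Step 2: a toy linear model estimating willingness to pay."""
    wtp = 0.8 * profile.avg_order_value + 0.5 * profile.past_purchases
    if profile.browsed_premium_items:
        wtp *= 1.2  # assumed uplift for premium-browsing behavior
    return wtp

def personalized_price(profile: ConsumerProfile, cost: float) -> float:
    """Step 3: choose a price between cost and the estimated WTP."""
    wtp = estimate_wtp(profile)
    # Never price below cost; capture part (not all) of the estimated surplus.
    return max(cost, cost + 0.7 * (wtp - cost))

p = ConsumerProfile(past_purchases=12, avg_order_value=50.0,
                    browsed_premium_items=True)
print(round(personalized_price(p, cost=20.0), 2))  # 44.64
```

The sketch also makes the article’s point visible in code: the operator’s own logic (which features enter the model, how much surplus the pricing rule captures) is embedded in steps 2 and 3, which is precisely where compliance scrutiny is needed.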


Currently, in the data collection stage, the legality of data acquisition is a prerequisite for operators to implement algorithmic personalized pricing, as it involves the protection of specific rights and interests such as consumer privacy and personal information. In recent years, Chinese scholars have conducted extensive research on the compliance of operators’ data collection and have reached a consensus. With the promulgation of the Civil Code, the Data Security Law, the Personal Information Protection Law, and relevant local regulations, China has completed the basic legal framework for the protection of consumers’ personal data and data security, and the issue of compliance in operators’ data collection has largely been resolved. However, in the algorithm-application and personalized-pricing implementation stages, regulatory blind spots still exist in China’s current legal norms and measures.


In the process of algorithm application, problems such as algorithmic discrimination and unfairness may arise when operators use algorithms to calculate personalized pricing schemes. As Akiva Miller put it, ‘It is one thing when people are treated differently in the market as a result of their different buying power; it is another thing entirely when people are treated differently as a result of deliberate data-driven judgments by the sellers about the kind of people their clients are.’ Based on this consideration, the basic premise of US regulation of operators’ algorithmic personalized pricing lies first in distinguishing between disparate treatment and disparate impact. Disparate treatment means that operators set different prices based on consumer data, which in most cases benefits both buyers and sellers. Disparate impact, by contrast, implies that the algorithmic variables determining personalized pricing are linked to specific factors such as ethnicity, gender, and race, which would turn neutral price discrimination into unequal group-specific discrimination. Thus, in the process of algorithm application, the key issue is to ensure that the algorithm applied to personalized pricing is fair and free of disparate impact.
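One minimal compliance check suggested by the disparate-impact distinction is to verify that no protected attribute feeds the pricing algorithm. The feature names and the protected-attribute list below are illustrative assumptions for the sketch.

```python
# A minimal, illustrative audit: flag protected attributes among the
# features used as pricing inputs. Names are hypothetical examples.

PROTECTED_ATTRIBUTES = {"ethnicity", "gender", "race"}

def check_pricing_features(features):
    """Return the protected attributes (if any) used as pricing inputs."""
    return sorted(set(f.lower() for f in features) & PROTECTED_ATTRIBUTES)

ok_features = ["past_purchases", "avg_order_value", "session_length"]
bad_features = ["past_purchases", "gender", "zip_code"]

print(check_pricing_features(ok_features))   # []
print(check_pricing_features(bad_features))  # ['gender']
```

Excluding protected attributes directly is necessary but not sufficient: facially neutral features (a postal code, for instance) can proxy for protected ones, so a fuller assessment must also examine pricing outcomes across groups, which is one task the impact-assessment mechanism discussed in Part IV could take on.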


In the personalized pricing process, how operators implement algorithmic personalized pricing directly affects consumer rights such as the right to know and the right to choose, owing to the information asymmetry between operators and consumers. In practice, operators hold an informational and technological advantage, not only because the data capture and algorithmic process are too complex for consumers to understand, but also because most jurisdictions have not yet established a mature, systematic regulatory regime for operators’ implementation of algorithmic personalized pricing. Consequently, whether and how algorithmic personalized pricing is implemented depends entirely on the operators themselves. In view of consumers’ perceptions of unfairness and negative social evaluations, operators are more inclined to exploit consumers’ informational disadvantage to conceal the fact that they are implementing personalized pricing. As mentioned above, the misinformed belief that a uniform pricing strategy still applies will lead consumers to make wrong purchases, which may reduce consumer welfare and undermine their right to know.


As a result, compared with the impact of algorithmic personalized pricing on consumer and social welfare, its implementation process is more deserving of attention and vigilance. The regulatory concept of algorithmic personalized pricing should shift from outcome effects to process legitimacy. The key issue is whether the process of algorithmic personalized pricing meets the requirements of algorithmic justice and protects the legitimate rights of consumers, rather than its final effect on welfare.


IV. REGULATORY SUGGESTIONS FOR ALGORITHMIC PERSONALIZED PRICING

Without clarifying how algorithmic personalized pricing affects the economic effects of the market and consumer rights, we cannot identify its real crux. In Part II, the author identifies the impact of algorithmic personalized pricing on consumer welfare, operator welfare, social welfare, and competitive effects through scenario-based analysis, aiming to show that legal research carries a subjective bias toward assuming that the implementation of algorithmic personalized pricing has anti-competitive effects. In Part III, the author points out that the potential legal risk of algorithmic personalized pricing lies in the damage to consumer rights caused by information asymmetry between operators and consumers and by operators’ improper conduct. The concept of regulating operators’ implementation of algorithmic personalized pricing strategies is thereby clarified. Its core ideas are to address the unbalanced positions of operators and consumers in the pricing process, to close the regulatory blind spot caused by information asymmetry so as to ensure compliance, and to empower consumers, through the design of regulatory paths, to respond to operators’ pricing strategies so that their rights are not compromised. Such a scheme not only avoids the potential legal hazards of algorithmic personalized pricing but also preserves the vitality of the market economy.


A. Introduction of the Transparency Principle to Address Information Asymmetry Between Operators and Consumers

The potential legal risk of algorithmic personalized pricing to consumer rights lies in consumers’ unawareness that operators are implementing it. The European Commission noted that consumers are able to react more positively to algorithmic personalized pricing when they are aware that operators are collecting data and that personalized pricing is being implemented. Conversely, consumers are less likely to take action to protect themselves from exploitation if they are unaware of the purpose for which their personal data are collected or of the existence of algorithmic personalized pricing. This view is in line with the findings of the UK OFT survey, which showed that algorithmic personalized pricing is less likely to cause harm when consumers are aware that personalized pricing strategies are occurring and understand how they work. In Germany, former Chancellor Angela Merkel publicly called for commercial platforms to disclose the algorithmic strategies they implement in order to narrow the information disadvantage between consumers and operators. The principle of transparency is material to the protection of consumer rights and is recognized as the most direct, effective and appropriate way to achieve procedural justice in algorithmic personalized pricing. Specifically, the principle of transparency should cover the disclosure of both the existence and the operation of algorithmic personalized pricing strategies.


On the one hand, the transparency principle requires operators to disclose the existence of an algorithmic personalized pricing strategy. As mentioned above, the implementation of algorithmic personalized pricing by an operator involves two steps: the collection of consumers’ personal data and the implementation of personalized pricing for consumers. According to the Personal Information Protection Law of China and the E-Commerce Law, operators have disclosure obligations in the process of data collection and implementation of algorithmic personalized pricing to protect consumers’ rights to informed consent. To meet the compliance requirements, operators implementing algorithmic personalized pricing should disclose the following information to consumers: first, in the process of personal data collection, operators should inform consumers that the collection of their personal data is the basis for the implementation of personalized pricing strategies, so that consumers clearly know the use scenario and purpose of their personal information. Second, in the consumer purchase process, operators should inform consumers of the fact that algorithmic personalized pricing is being implemented to ensure that consumers make correct purchase decisions based on their wishes.


On the other hand, the transparency principle requires operators to disclose how algorithmic personalized pricing works. The algorithm is the fundamental logic of the business operations of algorithmic personalized pricing. Therefore, the subjective intention embedded in the operator’s programming of the algorithm affects the final practical effect of algorithmic personalized pricing. Consumers, as the targets of algorithmic personalized pricing, would not be able to know that their rights have been infringed if they had no way of knowing how the pricing operates, let alone hold the operator liable for the infringement. Only by ensuring that the algorithm for calculating personalized pricing is transparent, at least to a certain extent, can its discriminatory and unequal aspects be detected. The solution necessarily involves opening the ‘black box’ of algorithms and subjecting them to evaluation and regulation. Meanwhile, the principle of transparency emphasizes the comprehensibility of the algorithmic personalized pricing strategy. Some scholars have pointed out that the transparency of an algorithm is not the same as its understandability. For consumers with non-technical backgrounds, disclosure of an algorithm’s source code, design variables, and statistical models is of little use for understanding where legal risks lie. To address this issue, current legislation has developed two main methods for regulating compliance with algorithmic procedures. One is to disclose the operation of algorithmic personalized pricing to general consumers. The other is to disclose the algorithmic operation to a specific regulator or an expert committee.
For the latter, designating a special algorithmic evaluation department to conduct audits can not only effectively supervise the legality of the implementation of algorithmic personalized pricing, but also translate otherwise opaque code-level evaluation results into plain language understandable by the public, thereby indirectly achieving transparency.


B. Activation of the Algorithmic Impact Assessment Mechanism to Ensure Compliance

At present, the United States, the European Union, Canada, Australia, and other jurisdictions have introduced impact assessment mechanisms for algorithm (data) implementation in their legislation to monitor whether operators’ implementation of algorithmic personalized pricing procedures is compliant. US algorithmic legislation treats impact assessment as the most important means of regulating algorithms. The Algorithmic Accountability Act of the United States obliges operators to conduct a systematic impact assessment of algorithms, covering the accuracy, fairness, discrimination, privacy, and security of automated algorithmic decisions. The Australian Competition and Consumer Commission has set up a dedicated branch to proactively monitor the operation of platform algorithms, with the authority to require the disclosure of algorithmic details. In Canada, the algorithmic impact assessment system is systematically provided for in article 6.1 of the Directive on Automated Decision-Making. Article 35 of the EU’s General Data Protection Regulation (GDPR) establishes the ‘data protection impact assessment’, requiring operators who meet the specified conditions to assess the risks and impacts before processing personal data. In research on algorithmic personalized pricing over the past two years, EU scholars have focused on establishing an algorithmic impact assessment mechanism within the GDPR system, which can be applied to algorithmic governance by effectively extending the GDPR’s provisions on data protection impact assessment. Implementing an algorithmic impact assessment mechanism can be seen as a form of reverse engineering through which regulators can quickly understand how algorithmic decisions shape personalized pricing.
Articles 55 and 56 of the Personal Information Protection Law of China establish an assessment system for the impact of personal information protection, requiring operators who use personal information for automated decision-making to carry out a risk assessment beforehand. This provides a legal basis for introducing an impact assessment mechanism for algorithmic personalized pricing; however, it still lacks relevance and specificity with respect to algorithmic personalized pricing itself. Therefore, in China, the impact assessment mechanism for algorithmic personalized pricing still needs to be refined in terms of assessment content, regulatory targets, scope of disclosure, and so on.


1. The Content of Assessment. — From a technical perspective, an algorithm for personalized pricing is a set of instructions constructed by operators to handle all potential transactions for a commodity. Its procedural nature, akin to due process, makes its operation more susceptible to evaluation and restriction under due process principles than human decision-making is. Thus, the design of an algorithmic process for personalized pricing must be legitimate. As mentioned above, the implementation of algorithmic personalized pricing affects market competition, consumer rights, social discrimination, and injustice, which requires operators to fully consider the legality of the algorithm-design process. Operators implementing algorithmic personalized pricing should not only meet the requirements of article 56 of the Personal Information Protection Law on consumers’ personal rights and security, but also fully assess the legal risks across multiple dimensions, such as the possibility of discriminatory consequences of personalized pricing and the possibility of competitive damage. The above analysis of economic effects shows that algorithmic personalized pricing has a positive effect on social welfare, consumer welfare and competition, and that the rationality and justification for its implementation derive from the substantive justice it may bring about. This is the fundamental reason why price discrimination is currently permitted in society. If the practical effects of algorithmic personalized pricing do not result in social gains, the operator’s subjective fault in designing the algorithm deserves closer scrutiny, because behind the algorithms lie specific practices of how operators use, game or even circumvent the technology.
For example, in existing business practice, operators can either introduce or exacerbate discrimination through the programming of algorithms, or use algorithms to circumvent discrimination. Therefore, whether the operator has proactively taken technical measures during the algorithm design process to avoid or reduce damage and risk should be a key element of ex post facto accountability. Where personalized pricing algorithms may entail high risks, operators should be able to explain the reasonableness of those risks and the risk-mitigation measures they have taken.
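A minimal sketch of what such an assessment record might contain, assuming Python and field names invented by the author: the dimensions follow the Algorithmic Accountability Act discussion above, but nothing here mirrors any statute’s actual schema.

```python
# Hypothetical sketch of an algorithmic impact assessment record.
# Field names are the author's invention, not a statutory schema.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    algorithm_name: str
    # Dimensions named in the Algorithmic Accountability Act discussion
    accuracy: str
    fairness: str
    discrimination_risk: str   # e.g. possibility of discriminatory pricing
    privacy: str
    security: str
    # Elements stressed in the text for ex post facto accountability
    mitigation_measures: list = field(default_factory=list)
    high_risk_justification: str = ""

    def complete(self) -> bool:
        """Complete only if every dimension is addressed and any declared
        high risk is accompanied by mitigation measures."""
        dims = [self.accuracy, self.fairness, self.discrimination_risk,
                self.privacy, self.security]
        if not all(dims):
            return False
        if self.high_risk_justification and not self.mitigation_measures:
            return False
        return True
```

The `complete()` check encodes the text’s accountability point: an operator that declares a high risk but lists no technical counter-measures has not finished the assessment.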


2. The Target of Regulation. — As mentioned above, current legislation on the regulation of operators’ algorithmic assessment has mainly formed two modes: reporting to the regulator or disclosure to the public. In China, the mode of reporting to regulators is more appropriate. On the one hand, consumers are often unable to assess the fairness of the algorithmic personalized pricing process, given the limits of their professional expertise and cognitive capacity. On the other hand, algorithms, as the commercial foundation on which operators implement personalized pricing, involve operators’ legally protected interests, such as trade secrets, security interests and competition interests. Therefore, adopting a filing system under which operators’ core algorithms are reported to the regulatory authorities can not only make the assessment mechanism for algorithmic personalized pricing more professional and operable, but also strike a balance between commercial interests and consumer protection.


3. The Scope of Disclosure. — The protection of consumers’ right to know and of operators’ commercial interests are not in complete conflict; a balance can be struck between consumers and operators within the compliance requirements for algorithmic personalized pricing. Both the US and the EU have made clear in their relevant acts that the evaluation mechanism for algorithmic personalized pricing does not require full disclosure of the operator’s algorithmic parameters. In the US, although assessment of the algorithmic mechanism for personalized pricing is mandatory, disclosure of the assessment results is not; only the operator has the right to decide whether to disclose the results to the public. The EU has likewise pointed out in its data protection impact assessment guidelines that making impact assessment reports available to the public is good practice; however, operators are not required to publish the entire assessment, which may instead be a summary of the concerns raised in the data protection impact assessment. Thus, for the assessment mechanism of algorithmic personalized pricing, it is more feasible for China to adopt a hierarchical approach to releasing information: a summary assessment report is released to the public, while detailed and sensitive algorithmic information is released only to the regulator or its designated expert verification team.


C. Implementation of the Opt-In and Opt-Out Models to Safeguard Consumer Rights

Algorithmic personalized pricing is a type of price discrimination in economics and, as such, a neutral concept. Provided it causes no damage to economic efficiency, social welfare, or consumer welfare, the pricing strategy that operators choose in carrying out production and business activities is an embodiment of management autonomy and should be protected by law. From the perspective of consumers, in the era of the platform economy, consumers likewise have the right to choose whether or not to accept an operator’s pricing strategy. Since operators and consumers are in obviously unequal positions in terms of access to information and market power, the law should act accordingly to ensure the realization of consumers’ right to free choice. This is not an intervention in operators’ market activities, but a guarantee that market players can carry out commercial transactions on an equal footing.


Under China’s existing legal norms, the protection of consumers’ right to choose is one of the important elements of the E-Commerce Law of China and the Personal Information Protection Law. Article 18 of the E-Commerce Law limits platforms’ implementation of algorithmic personalization, clearly specifying that e-commerce operators who provide search results for goods or services based on consumer interests, hobbies, consumption habits and other characteristics must also provide options not targeted at consumers’ personal characteristics. Article 12 of the Provisions on the Governance of the Online Information Content Ecosystem issued by the Cyberspace Administration of China likewise stipulates that network information content platforms using personalized algorithms to push information should set up recommendation models that comply with the relevant content requirements of those Provisions, and establish and improve mechanisms for manual intervention and user self-selection. This provision, similar to article 18 of the E-Commerce Law, aims to give consumers the right to opt out of personalized recommendations, akin to a right to be exempt from automated processing. Articles 13 and 24 of the newly enacted Personal Information Protection Law stipulate operators’ obligations at the personal data collection and automated decision-making stages. It can thus be seen that, at the stage of personal data collection, the Personal Information Protection Law adopts an opt-in model, which takes consumers’ individual consent as the ground for the legality of data processing and imposes substantive requirements on the validity of that consent, i.e., that it be voluntary, explicit, and fully informed.
Meanwhile, the opt-out model is adopted in the automated decision-making process: operators who conduct business by means of automated decision-making must offer consumers an option not targeted at their personal characteristics or provide them with an easy way to refuse. Algorithm-based personalized pricing is a type of automated decision-making by operators. Operators should therefore notify consumers on their platforms or webpages that personalized pricing is being offered to them, and provide the option of a uniform pricing strategy or of opting out of the personalized pricing strategy.
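The two-stage model described above can be sketched as follows. This is a hedged illustration in Python; the names (`ConsumerConsent`, `quoted_price`) and the default values are invented for the example, not drawn from the statutes.

```python
# Illustrative sketch of the two-stage consent model discussed above:
# opt-in at data collection, opt-out at the automated pricing stage.
# All names and defaults are the author's assumptions.
from dataclasses import dataclass

@dataclass
class ConsumerConsent:
    data_collection_opt_in: bool = False   # opt-in: default is no consent
    pricing_opt_out: bool = False          # opt-out: default is enrolled

def quoted_price(consent: ConsumerConsent, uniform: float, personalized: float) -> float:
    """Personalized pricing may apply only with opt-in data consent and
    no opt-out; otherwise the uniform price must be quoted."""
    if consent.data_collection_opt_in and not consent.pricing_opt_out:
        return personalized
    return uniform
```

The asymmetry of the defaults captures the legal design: consent to data collection must be affirmatively given (opt-in), whereas the automated-decision stage presumes enrollment but must always leave an easy exit (opt-out).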

V. CONCLUSION


The combination of advanced technological tools such as big data and algorithms is changing the way in which companies make commercial and strategic decisions, influencing the original competitive landscape of the market and our real life. While the impact of these new business models, such as algorithmic personalized pricing, on our lives is largely unknown, the development of emerging technologies is undoubtedly associated with significant economic efficiency. From the perspective of social development, both operators and consumers benefit from new, better, and more targeted products and services. In the era of the digital economy, the key to regulation lies not in the creation of new technologies, formats and models, but in the developers and operators who innovate and use emerging technologies and models.
