Datasets: Dataset Viewer

| prompt | reference_list | __internal_uuid__ | Primary_Domain | Secondary_Domain | prompt_id |
|---|---|---|---|---|---|
翻译,英文翻译成中文。不要输出译文以外的内容。
以下是你本次的任务:
The preceding sections have examined various theories on concepts and representations in philosophy, cognitive science, and machine learning, classifying them into four categories: abstractionist, similarity, functional, and invariance approaches. This section compares these views from a meta-perspective, exploring their connections and deriving implications for further studies. By identifying the intersections and differences among these approaches, this discussion section aims to provide a clearer framework for interdisciplinary studies of concepts and representations.
The first axis for comparison is the role of concepts. Arguably, concepts play various roles in classification, learning, communication, problem solving, and so on; but these roles can be further understood through the lens of two major functionalities: descriptive and inferential. In the descriptive use, concepts serve to characterize and summarize data in various formats. Among the approaches discussed so far, the abstractionist and similarity approaches are particularly motivated by these descriptive tasks. The major goal of abstractionists is to classify objects in a hierarchical manner, by constructing a conceptual lattice from a data table or “context” that lists items and their observed properties (Section 2). Given a similar (but possibly continuous) dataset, the similarity approach visualizes mutual relationships between items in a space of two or more dimensions based on their attributes and delineates concepts as clusters of points that are close to each other (Section 3). These outcomes—lattices or plots—reveal the inherent structure of data not visible in the original format.
Concepts also play a central role in inferential tasks, e.g., for the purpose of predicting future events or possible behaviors of objects. By identifying an object in front of me as a dog, I can anticipate what it will and will not do: it may bark, chase after balls, and sniff around, but probably not climb trees or purr like a cat. Though very simplistic, it is nonetheless a bona fide act of prediction: based on some observable cues like floppy ears, I derive its behavioral characteristics that I have not yet seen, at least with respect to this particular dog in front of me. This inferential role of concepts is the primary focus of the functional perspective (Section 4). Inferences from one set of attributes to another exploit the information about internal constraints on the possible combinations of attributes. This information, encoded in the functional form, constitutes our understanding of concepts. Alternatively, the invariance approach captures inference from a different perspective, by identifying a concept as a set of transformation rules that govern possible changes in the object (Section 5). Such equivariant group actions highlight the inferential role of concepts that allows us to simulate how objects or their perceptions transform in accordance with changes in the environment or the perceiving subject.
The above discussion should not imply that models are inherently connected to or designed for particular roles. Indeed, there is no contradiction in taking a conceptual lattice as representing internal constraints of concepts or using the similarity space for the purpose of prediction, as is quite common in natural language processing. The descriptive/inferential contrast rather concerns the modeling purpose: namely whether one is interested in data themselves, or in some underlying structure of “uniformity of nature” from which data are sampled. In the former context, mathematical models are taken as a sort of descriptive statistics that serve for the economy of thought; while in the latter, they are interpreted as representing a generative mechanism that lies beyond any particular data (Otsuka, 2022). These two desiderata are often in conflict: the more one tries to adjust one’s concepts as faithfully as possible to given data, the more likely they are to overfit the data, resulting in a loss of predictive ability (Forster and Sober, 1994). A good descriptive model need not be a good inferential model, and vice versa. Therefore, when evaluating a particular model, it is essential to clearly specify the criteria by which it is being evaluated.
Our second point of consideration concerns the relationship between concepts and their defining features. In the philosophical and cognitive science literature, concepts have been defined or characterized in terms of familiar and explicit features we use in everyday life, such as color, size, shape, and so on. Accordingly, conceptual hierarchies or similarity maps have been built and evaluated on the basis of a pre-specified set of mostly ostensible features. This represents a “feature-first” approach, where features act as the raw material for thoughts, with concepts being formed by combining or mixing these features. In contrast, features in representation learning are not fixed beforehand, but rather are extracted from data as axes or dimensions that constitute the latent space. In addition, the meaning of these extracted features is not given a priori, but must be determined by unpacking the intricate internal structure of the trained neural network through detailed analysis. This can be seen as a “representation-first” approach, considering representations to be epistemologically prior to features. These two approaches face different challenges. The main challenge of the feature-first approach lies in its empirical adequacy. A common criticism against abstractionism or the classic view of concepts points out that it is simply impossible to define concepts, such as game or human, through combinations of existing features or traits (Wittgenstein, 1953; Boyd, 1991). Moreover, it is not clear which features should be considered to define concepts. Outcomes of formal concept analysis heavily depend on the list of attributes used to characterize data (Section 2). And Feigenbaum’s bottleneck highlights the difficulty of determining the relevant attributes for the problem at hand. A similar issue arises for prototype and exemplar theorists in determining the appropriate attributes/dimensions and their relative importance in constructing the similarity space, which significantly affects similarity judgments (Murphy, 2004).
With its successful applications to a range of empirical problems, the representation-first approach seems to be overcoming all the above issues: deep learning models are able to learn robust representations (with some reservations discussed shortly) that automatically extract relevant features from data, capture complex patterns, and generalize well to new situations. The problem, however, is that these features generally resist intuitive interpretation. One thus needs to read off meanings from trained models, but to do so requires a clear understanding of what one is looking for—that is, what are meaningful features? The idea of disentangled representation discussed in Section 5 is one attempt to explicate what we consider meaningful features of a representation in terms of independent group actions. Understanding features is crucial not just for interpretability but also to enhance model performance, ensure robustness, and improve fairness in machine learning applications (Lipton, 2016). It has been pointed out that the well-known phenomenon of adversarial attacks, where deep learning models misclassify objects due to the addition of small, often imperceptible perturbations to the input data, is a consequence of the model’s using complex, high-dimensional features that are not necessarily aligned with human-perceptible features (Ilyas et al., 2019). Also, understanding features allows data scientists to identify and mitigate biases in machine learning models by examining how different features influence predictions (Ribeiro et al., 2016). These efforts can be understood as attempts to identify the concepts (representations) used by machines and analyze them into components (features) amenable to human understanding.
The feature-first and representation-first approaches can thus be understood as akin to digging a tunnel from opposite sides. The former starts with a set of explicit features and builds complex concepts in a bottom-up way, while the latter aims to break down given representations into understandable pieces in a top-down fashion. The important challenge in contemporary concept research is to make these approaches meet halfway.
The final, but by no means least, point concerns the relationship among the four approaches discussed in this paper. In his influential book, Edouard Machery (2009) argued for the disunity of concepts, challenging the traditional view that concepts are a unified phenomenon within cognitive science. His view is that what are labeled “concepts” actually involve heterogeneous mental kinds with different functionalities, purposes, and empirical bases. Be that as it may, mathematical considerations naturally suggest the logical relationships between different conceptual models. Indeed, we have already seen some such attempts in the present paper. The group-theoretic analysis of disentangled representations can be thought of as an attempt to integrate the theoretic aspect of concepts (encoded by group operations) and their similarity-based aspect (represented by a manifold). In Section 3, we have seen some recent works in natural language processing that aim to encode the hierarchical structure of concepts into the word vector space by using non-Euclidean (hyperbolic) spaces (Nickel and Kiela, 2017) or representing words by boxes instead of vectors (Vilnis et al., 2018). If successful, this line of research will reconcile the abstractionist and similarity approaches, which have been considered rivals in both the philosophy and cognitive science literatures.
Underlying these studies is the overarching theme of the relationship between geometry and algebra. Lattices and groups are algebraic in nature, while metric spaces and manifolds have a clear geometric character. Hence the four approaches discussed in this paper can each be seen as shedding light on the geometric or algebraic aspects of concepts. Kant (1999) was the first to make a clear distinction between, and propose a unification of, these two aspects of human cognition—namely sensibility equipped with a geometric form, and understanding that follows logical and algebraic principles. Over two centuries after Kant, contemporary machine learning research is trying to integrate both components to understand and improve the performance of neural networks, with the aid of much more advanced mathematical machinery than that available to Kant, including non-Euclidean geometry, topology, and group or gauge theory (Sanborn et al., 2024). Just as Kant was inspired by Newtonian physics in his time, these developments in machine learning will provide new insights into the philosophical understanding of concepts.
|
考点1:“Sensibility”推荐译为“感性”
考点2:“unpacking”推荐译为“解析”,不可译为“拆解”
考点3:“The first axis for comparison is the role of concepts”中“axis”不可直译为“轴”,应为“维度”或“方面”
考点4:“read off meanings”推荐译为“解读含义”“阐释含义”,不可译为“读出含义”
|
0689a
|
学术论文
|
人文科学
|
185
|
翻译,英文翻译成中文。不要输出译文以外的内容。以下是你本次的任务:
1. Why has the global capital market grown so rapidly in recent decades?
In recent decades, the global capital market has grown so rapidly mainly because of the rise of privatizations. With private capital flows rising from less than 5 percent of world GDP in 1975 to about 20 percent today, privatizations have significantly increased market liquidity. Privatization has also played a pivotal role in global capital market development.
A. The Rise of Capital Market-Based Finance
Capital market-based finance has in fact been increasing in importance, both absolutely and relative to financial intermediary-based finance, in both developed and developing countries over the past decade. Capital markets are winning the present and seem likely to dominate the future of corporate finance in developed and developing countries alike.
a. The Stable Role of Commercial Banking in Modern Economies
Ordinary "relationship banking" appears to be (at best) holding its own as a source of corporate financing around the world, and more likely is in decline. The bits of banking that are growing rapidly are those parts that provide high value-added products (especially risk management tools) and provide large-scale syndicated credits to corporate borrowers. During the late-1980s and early-1990s, when Japan and Germany appeared to be outperforming major capital market-oriented countries such as Britain and the US, the academic literature often favored bank-based systems. Examples of this literature include Prowse (1992), Kester (1992), and Porter (1992), while the supporting arguments are summarized in Maher and Andersson (1999) and Tsuru (2000). More recently, however, the weight of opinion has swung strongly in favor of the idea that capital markets have decisive comparative advantages over banks and other financial intermediaries as optimal monitors and financiers of a nation's corporate life. This reassessment has been driven in part by the observation, discussed at length above, that capital markets have been prospering relative to banks for many years now. The repetitive nature--and massive costs--of banking crises in developing and developed countries alike has also convinced many observers that banks are inherently fragile institutions, whose role in corporate finance should be minimized as much and as quickly as possible (Economist (1997, 1999)).
b. The Rapid Growth in Stock Market Capitalization and Trading Volume Since 1983
The period from 1983 to 2000 was one of very rapid growth in market capitalization in every country except Japan. Total world market capitalization increased over ten-fold (to $35.0 trillion) between 1983 and 1999, and the total capitalization of the US market increased almost nine-fold (from $1.9 trillion to $16.6 trillion) over the same period.
c. The Dramatic Growth in Securities Issuance Volume Since 1990
Another way of measuring the rise of capital markets is to examine whether their share of annual corporate financing activity has grown relative to that of other sources of funding. Security offerings by US issuers accounted for two-thirds of the global total throughout 1990-1999, which implies that non-US securities issues increased from $191 billion in 1990 to $750 billion in 1998, and then to $1.19 trillion in 1999. The surge in non-US issuance volume in 1999 was largely due to the popularity of euro-denominated bond issues, which actually exceeded dollar-denominated bond issues for much of 1999.
d. The Phenomenal Growth in Venture Capital Financing in the United States
One highly specialized, but extremely important type of financing has also grown very rapidly over the past decade, and especially so since 1997. This is venture capital investment by US venture capital partnerships. The fund-raising patterns of these private equity investors are discussed in Gompers and Lerner (1998), and the competitive advantages of US venture capitalists versus those in other developed countries are described in Black and Gilson (1998).
e. The Surge in Mergers and Acquisitions Worldwide
There has been an almost incredible increase in the total volume of merger and acquisition activity since 1990. While takeovers have always played an important role in the United States, the rise in M&A (Merger and Acquisition) activity in Europe during the 1990s was even more dramatic. From less than $50 billion annually in the late-1980s, the total value of M&A involving a European target reached $592 billion in 1998, before more than doubling to $1.22 trillion in 1999--rivaling the US total. The global value of M&A activity in 1999 reached $3.4 trillion, an astounding 10% of world GDP.
Next I will document that share issue privatizations have truly transformed share ownership patterns of investors in many different countries.
B. Privatization's Impact on Stock and Bond Market Development
We should be careful in inferring causation regarding privatization's impact on market growth, since a shift in ideology or some other exogenous political or economic change might have caused both the privatization and the overall boom.
a. Total Proceeds Raised by Privatization Programs
It is clear that national governments have been among the biggest winners from privatization programs, since these have dramatically increased government revenues, which is clearly one reason the policy has spread so rapidly. As mentioned above, Privatisation International [Gibbon (1998, 2000)] reports that the cumulative value of proceeds raised by privatizing governments exceeded $1 trillion sometime during the second half of 1999. As an added benefit, this revenue has come to governments without having to raise taxes or cut other public services.
b. Privatization's Impact on International Investment Banking
All international investment banks compete fiercely for share issue privatization mandates, for two principal reasons. First, because the offerings are so large and so visible--and are almost always designed to help promote the market's capacity to absorb subsequent stock offerings by private companies--these are very prestigious mandates. To date, the large US and British brokerage houses have had the most success in winning advisory and underwriting mandates, though all countries that launch large-scale SIP programs tend to favor local investment banks as "national champions" to handle the domestic share tranche. The second reason banks compete so fiercely for SIP mandates is because they can be extremely profitable. In spite of the fact--documented by Jones, et al (1999) and Ljungqvist, et al (2000)--that SIPs have significantly lower underwriting spreads than private sector offerings, their sheer size and lack of downside price risk make them very lucrative for underwriters.
2. Will this growth continue throughout the 2000s?
As we indicated above, the global capital market has grown so rapidly in recent decades because of the rise of privatizations. Privatizations have increased market liquidity. Now we have already stepped into the 21st century. I believe that the growth will continue for the following reasons. First, most of the south-east Asian countries have recovered from the 1997 financial crisis. These countries now have the capital to do business, and they are getting back on the fast-growth track. Second, by the end of 2001, the world's biggest developing country, China, had entered the WTO (World Trade Organization). This is really great news. As we all know, today's China occupies a serious position in the world economy. Its innovation and opening-up policy enables China to keep achieving a high GDP growth rate. This drives the global capital market to keep growing.
Summary and Conclusions
This essay examines the impact of share issue privatizations (SIPs) on the growth of world capital markets (especially stock markets). I begin by documenting the increasing importance of capital markets, and the declining role of commercial banks, in corporate financial systems around the world. I then show that privatization programs-- particularly those involving public share offerings--have had a dramatic impact both on the development of non-US stock markets and on the participation of individual and institutional investors in those stock markets.
This has explained the reason for the fast growth of the global capital market. I then succinctly indicated the continuance of the rapid growth and the great future ahead.
Last but not least is the recommendation. I can confidently assert that, if executed properly, a series of share issue privatizations can indeed promote the growth of the global capital market, which will yield economic and political dividends for many years to come. That means there is a need to encourage the development of SIPs in order to sustain the growth of the global capital market.
|
考点1:“market-based finance”推荐译为“市场主导式金融”
考点2:“The bits of banking” 推荐译为“银行业务”
考点3:“intermediary-based finance”应译为“以中介为基础的金融”
考点4:“syndicated credits”应译为“银团贷款”
考点5:“the weight of opinion” 应译为“主流观点”
考点6:“proceeds”应译为“单笔交易的收入所得”
考点7:“offerings”应译为“发行”
考点8:“prestigious mandates” 应译为“顶级授权”
考点9:“brokerage houses”应译为“证券经纪商”
考点10:“SIP”应译为“股票发行私有化”
|
08ce3
|
学术论文
|
社会科学
|
48
|
翻译,中文翻译成英文。不要输出译文以外的内容。
以下是你本次的任务:
近年来,“替代两小时户外”的哺光仪、“模拟阳光”的大路灯等产品层出不穷。这些产品能否切实改善视力?背后是否暗藏风险?第30个全国“爱眼日”到来之际,听听权威专家的解读与建议。
三分钟哺光能否替代两小时户外?
哺光仪,也称重复低强度红光(RLRL),是一种以激光为光源照射眼睛,用于近视控制或弱视治疗的医疗设备产品。
记者在某电商平台上输入“哺光仪”搜索发现,相关产品品类众多,最高价格近4000元,有商家还提供800元/月的租赁使用服务。
2023年,因哺光仪不规范使用造成某12岁女童视网膜黄斑损伤,导致视力永久性受损。同年6月,国家药监局发布通知,将激光近视弱视治疗仪类产品划分为第三类医疗器械,并给予企业和市场一年过渡期。这意味着在2024年7月1日之后,企业生产、销售哺光仪,须具有第三类医疗器械注册证和生产许可证。
今年4月,北京大学人民医院、北京同仁医院有关专家在国际知名眼科期刊共同发表名为《近视儿童重复低强度红光治疗后视锥细胞密度的变化》的论文,指出以激光作为光源对儿童眼睛进行照射以防控近视,有引发视锥光感受器受损的风险。
中山大学孙逸仙纪念医院眼科副主任医师张一弛说,哺光仪目前临床研究观察最长时间为一年。部分孩子使用后眼轴增长确有所控制,但发生机制尚不明确,长期暴露情况下安全性、有效性也有待观察。
首都医科大学附属北京康复医院眼科主任刘莹说,部分家长因孩子近视进展或眼轴增长过快而选择使用哺光仪,也有一些家长认为孩子度数不严重,想用三分钟哺光替代两小时户外,这些想法并不可取。
这位专家表示,户外活动作为近视防控方案的循证医学证据在全球的观察时间更长、数据更多。建议家长们理性评估风险与收益,优先选择证据充分的防控手段。
大路灯能“模拟自然光”?
“相比普通台灯,大路灯的室内照明光线分布相对均匀,能够减少阴影和暗区,照射范围也更广,在一定程度上有助于减轻视疲劳,但并不足以单独作为一种近视防控方法。”刘莹说,与按照三类医疗器械管理的哺光仪不同,大路灯本质上是一种灯具。
浙江大学眼科医院视光中心主任倪海龙表示,万物生长靠太阳,晴朗白天的太阳光照度可达10万勒克斯(lux),远超能提供1000lux左右光照强度的所谓大路灯。“同时,大路灯也无法替代户外光刺激视网膜分泌多巴胺的关键作用。”他说。
很多临床医生都在门诊中遇到家长请求推荐灯具产品。不少家长表示,大路灯“参数眼花缭乱、价格五花八门”,“缺乏行业标准、质量良莠不齐”,购买后发现部分低价产品夸大宣传。
《近视防治指南(2024年版)》明确,读写应在采光良好、照明充足的环境中进行,桌面的平均照度值不应低于300lux。国家标准《读写作业台灯性能要求》对灯具色温、显色指数、照度、视网膜蓝光危害和闪烁等多项指标提出要求。
专家强调,近视的发展受环境因素、遗传因素等共同影响,采光照明只是其中一个方面。选购灯具应优先参照国家出台的相关标准。
改善整体光环境和用眼习惯,避免依赖单一技术手段
倪海龙强调,近视防控的关键仍在于一增一减,即增加户外活动,减少近距离用眼负担,同时可辅以改善光环境及用眼习惯,要打组合拳,而非依赖单一技术手段。
专家建议要科学看待人工光源,辅助工具不可替代自然光照和基础防控。
“不论是儿童青少年还是成年人,自然光作为视觉的核心保护因素,其作用不可替代。”刘莹说,人工光源的应用需以“安全、循证”为原则,避免因商业营销或焦虑心理陷入误区。
近年来,党和国家高度重视青少年近视防控,部署三级干预措施。一级预防包括从婴幼儿期(2岁半起)就定期筛查视力,避免过早接触电子产品;二级预防包括通过户外活动、用眼习惯调整降低近视风险;三级预防包括采用离焦眼镜、角膜塑形镜及低浓度阿托品等医学手段延缓近视加深。
专家表示,近视防控是个系统工程,需要全社会行动起来,关注全生命周期的用眼健康。
|
考点 1: "重复低强度红光(RLRL)" 应译为 "repeated low-level red light (RLRL)"
考点 2: "以激光为光源照射眼睛" 应译为 "laser-based ocular irradiation"
考点 3: "国家药监局" 应译为 "National Medical Products Administration (NMPA)"
考点 4: "注册证和生产许可证" 应译为 "registration certificate and production license"
考点 5: "视锥细胞密度" 应译为 "cone cell density"
考点 6: "视锥光感受器受损" 应译为 "cone photoreceptor damage"
考点 7: "质量良莠不齐" 应译为 "uneven product quality"
考点 8: "《近视防治指南(2024 年版)》" 应译为 "Guidelines for Myopia Prevention and Control (2024 edition)"
考点 9: "国家标准《读写作业台灯性能要求》" 应译为 "National Standard: Performance Requirements for Reading and Writing Desk Lamps"
考点 10: "一增一减" 应译为 "'one increase, one decrease' strategy"
考点 11: "组合拳" 应译为 "a combination strategy" / "multi-pronged approach"
考点 12: "青少年近视防控" 应译为 "juvenile myopia prevention and control"
考点 13: "离焦眼镜" 应译为 "defocus spectacles"
考点14: “全国爱眼日”应译为“Sight Day”
|
0a0f2
|
新闻资讯
|
新闻报道
|
96
|
翻译,中文翻译成英文。不要输出译文以外的内容。
以下是你本次的任务:
国家金融监督管理总局是中国负责金融行业统一监督管理的机构,主要职责包括强化机构监管、行为监管、功能监管等,以维护金融业的合法和稳健运行。该局还参与金融业改革开放和监管有效性相关问题的研究,并拟订相关法律法规草案。近期,该局还修订了《货币经纪公司管理办法》,并向社会公开征求意见,以加强对货币经纪公司的监管。此外,局长李云泽在新闻发布会上表示,将推出八项增量政策,以支持经济回升。
2025年6月,为加强商业银行的市场风险管理,国家金融监督管理总局根据《中华人民共和国银行业监督管理法》《中华人民共和国商业银行法》以及其他有关法律和行政法规,制定了《商业银行市场风险管理办法》,并印发给各金融监管局,各政策性银行、大型银行、股份制银行、外资银行、直销银行、金融资产管理公司、金融资产投资公司。
《商业银行市场风险管理办法》(下文简称《规则》)共五章四十三条,主要涵盖以下几个方面:首先,明确市场风险定义。《规则》明确了适用范围,明确排除了银行账簿中的利率风险,并加强了与《商业银行资本管理办法》和《银行业金融机构全面风险管理指引》的协调一致。其次,强调改进市场风险治理框架。规则阐明了董事会、监事会和高级管理层的责任,界定了三道防线具体范围和职责,并强调银行需要在集团层面加强市场风险管理。 第三,细化市场风险管理要求。规则要求银行进行端到端的市场风险管理,并详细规定了风险识别、计量、监测、控制和报告的总体要求。规则还改进了内部模型的定义,并提高了对模型治理和压力测试的要求,确保与当前市场风险计量框架和管理实践保持一致。未来,国家金融监督管理总局将加强监管和指导,确保规则的有效实施,并将指导银行提升其市场风险管理能力。
同时,国家金融监督管理总局还具有对于中国大陆境内金融机构评级的职责,例如,同样在2025年6月,为了积极参与国际保险集团的监管治理,维护全球金融市场的稳定,推动中国保险业高水平对外开放,国家金融监督管理总局根据国际保险监督协会发布的《国际保险集团监管共同框架》,对中国具有国际影响力的保险集团进行了评估和认定。经评估,中国再保险(集团)股份有限公司(简称“中国再保”)被认定为具有国际影响力的保险集团。今后,国家金融监督管理总局将指导中国再保按照《国际保险集团监管共同框架》进一步健全其风险管理体系,持续提升经营管理能力和国际竞争力。
提到再保险,可能很多人会觉得陌生。再保险也称分保,是保险人在原保险合同的基础上,通过签订分保合同,将其所承保的部分风险和责任向其他保险人进行保险的行为。再保险作为“保险的保险”,对于保障保险市场安全,为直接保险公司分散赔付风险、扩大承保能力和巨灾保障功能,并辅助保险市场调控以及强化行业风险管理发挥了重要的作用。以中国再保为例,中国再保险(集团)股份有限公司(简称“中国再保”)由中华人民共和国财政部和中央汇金投资有限责任公司发起设立,注册资本人民币42,479,808,085元,其中财政部持股11.45%,中央汇金投资有限责任公司持股71.56%。中国再保源于1949年10月成立的中国人民保险公司,2007年10月整体改制为股份有限公司。目前,中国再保主要控股7家境内子公司:中国财产再保险有限责任公司(简称“中再产险”)、中国人寿再保险有限责任公司(简称“中再寿险”)、中国大地财产保险股份有限公司等。
截止2024年11月,中国大陆境内15家再保险公司中有9家为外资再保险公司,分别是慕尼黑再保险公司北京分公司、德国通用再保险股份公司上海分公司、RGA美国再保险公司上海分公司、大韩再保险公司上海分公司、法国再保险公司北京分公司、汉诺威再保险股份公司上海分公司、瑞士再保险股份有限公司北京分公司、信利再保险(中国)有限公司、曼福再保险公司北京分公司;6家为内资机构分别是中国农业再保险股份有限公司、人保再保险股份有限公司、太平再保险(中国)有限公司、中国财产再保险有限责任公司、中国人寿再保险有限责任公司。
随着我国经济的快速增长和保险行业的不断发展,中国再保险市场专业主体逐渐丰富,保费规模和市场份额稳步扩大。但也要看到对标国际发达再保险市场水平,我国再保险市场还存在一定差距。业内人士建议,内资再保险公司可借助保险科技手段,加强自身的风险管理能力,更好地满足直保公司的服务需求。同时,加大资本投入,增强资本实力,提升自身的承保能力和抗风险能力。此外,加强国际合作与交流,学习国际先进的保险专业技术,利用国际市场进行风险分散,提升自己的承保能力与盈利能力。
|
考点1:国家金融监督管理总局只能译为National Financial Regulatory Administration,因为这是固定机构名称
考点2:《中华人民共和国银行业监督管理法》只能为Banking Supervision Law of the People's Republic of China,因为这是法律文件名称
考点3:《中华人民共和国商业银行法》只能译为Law of the People's Republic of China on Commercial Banks,因为这是法律文件名称
考点4:《商业银行资本管理办法》只能译为Administrative Measures for the Capital of Commercial Banks,因为这是法律文件名称
考点5:《国际保险集团监管共同框架》只能译为Common Framework for the Supervision of Internationally Active Insurance Groups,因为这是法律文件名称
考点6:中央汇金投资有限责任公司只能译为Central Huijin Investment Ltd.,因为这是公司名称
考点7:中国人民保险公司只能译为The People's Insurance Company (Group) of China,因为这是公司名称
考点8:中国财产再保险有限责任公司(简称“中再产险”)只能译为China Property & Casualty Reinsurance Company Ltd.(CHINA RE P&C),因为这是公司名称
考点9:中国人寿再保险有限责任公司(简称“中再寿险”)只能译为China Life Reinsurance Company Ltd. (China Re Life),因为这是公司名称
考点10:中国大地财产保险股份有限公司只能译为China Continent Property & Casualty Insurance Co., Ltd (CCIC),因为这是公司名称
考点11:德国通用再保险股份公司上海分公司只能译为General Reinsurance AG Shanghai Branch,因为这是公司名称
考点12:RGA美国再保险公司上海分公司只能译为RGA Reinsurance Company Shanghai Branch,因为这是公司名称
考点13:大韩再保险公司上海分公司只能译为Korean Reinsurance Company, Shanghai Branch 或 Korean Reinsurance Company Shanghai Branch,因为这是公司名称
考点14:法国再保险公司北京分公司只能译为SCOR SE Beijing Branch,因为这是公司名称
考点15:汉诺威再保险股份公司上海分公司只能译为Hannover Rück SE Shanghai Branch,因为这是公司名称
考点16:瑞士再保险股份有限公司北京分公司只能译为Swiss Reinsurance Company Ltd. Beijing Branch,因为这是公司名称
考点17:曼福再保险公司北京分公司只能译为MAPFRE RE Beijing Branch,因为这是公司名称
考点18:中国农业再保险股份有限公司只能译为China Agricultural Reinsurance Co., Ltd.,因为这是公司名称
考点19:人保再保险股份有限公司只能译为PICC Reinsurance Company Limited,因为这是公司名称
考点20:太平再保险(中国)有限公司只能译为Taiping Reinsurance(China)Company Limited,因为这是公司名称
|
0c820
|
垂类场景
|
金融
|
198
|
翻译,中文翻译成英文。不要输出译文以外的内容。
以下是你本次的任务:
与过去相比,如今我们与他人交往的方式真是空前多样。曾经,我们只能依靠面对面交谈,但在过去的几千年中,新的交往技术不断被创造出来。数字时代的独特之处,便在于使人与人相联系的技术中介经历了快速转型。在面对面交谈、固定电话、邮政信件等传统交往方式的基础上,我们拥有了电子邮件、移动电话、短信、即时通信、网聊、留言板、社交网络、照片分享、视频分享、多人在线游戏等诸多新型交往方式。不过,在面对新媒体时,人们也时常感到困惑。在这个创新与扩散日新月异的时代,我们自然会关心这些新型交往方式对人际关系的影响。
面对层出不穷的新媒体,我们往往有两种反应: 一些人对人际交往的浅薄化表示担忧,对于很多人而言,日益频繁的中介化互动似乎威胁到了人际关系的神圣性;另一些人则认为, 新媒体为我们创造了更多与他人建立联系的机会,从而形成了更强大、更多样化的关系链。这两种观点都有着深厚的文化历史背景,也都印证了同一种观念:数字媒体正在改变社会关系的本质。随着时间的推移,当我们逐渐习惯了新的传播媒体时,人们的态度开始发生微妙的改变。这些媒体的存在被视为理所当然,甚至可以忽略不计。所以,我们思考科技、探索交往,以及反思两者关联的最佳时刻,就是这些新媒体刚刚出现,有关它们的使用准则还未固定之时。
本书围绕数字媒体和数字设备在人际关系中扮演的角色展开,旨在为大家提供一种批判性思考的方式。比起目不暇接、发人深省的故事逸闻,本书更愿意提供一些理论和数据资料,帮助读者理解人际关系中发生的重要变化。我从1990年开始关注这个领域,1991年启动了我第一项有关网络人际关系的研究,1994年起在传播学院开设传播和新技术的相关课程。本书的素材取自我的研究项目、观察以及大量与此相关的其他研究文献,这些素材为评估和理解人际关系的变迁奠定了框架。
当我们试图理解数字媒体和它在我们生活中的位置,以及对我们的个性和人际关系的影响时,会发现各种各样值得思考的议题。在技术的最初发展阶段,它会影响我们如何看待世界、社区、关系和自我。这也会促使社会和文化的重构与反思。卡罗琳·马尔温的一项著名研究考察了19世纪大众科学杂志,她发现在人类历史中,电、电报、电话这些新技术的出现会将人们熟悉的事物陌生化,因此也更容易导致改变。这种改变又会造成人们的焦虑。在古代社会,人们曾为书写的出现担忧;在维多利亚时代,人们害怕电;如今,我们的“焦虑不仅针对电脑,还针对更广泛意义上的技术”。
从远古时代起,这些传播技术出现的根本目的就是能让人们在身体缺席时,仍旧能够传递信息。在19世纪电报发明之前,这种超越空间的能力不可避免地会伴随时间的延迟,信息传达给受众可能要用上几年的时间。随着电报的出现,人类在历史上首次实现了不受距离限制的实时通信。人们也许曾对写作和出版感到震惊,不过,与面临这种全新的、瓦解时空边界的力量时产生的震惊相比,前者只能算是小巫见大巫。毕竟数千年来,人类早已习惯面对面社交,这种以极高速度进行远距离交流的能力,打破了我们根植于集体意识深处的社会认知。数字媒体的出现则造成了更严重的困扰。它们向学者和普通人提出了许多重要的问题:怎样才能既在场又缺席?如果自我不再需要身体作为载体,它将是什么样子?我们为何会在拥有如此多控制权的同时,又丧失了如此多的自由?当个人交流通过大众媒体传播时,意味着什么?当大众传播被用于个人交流时,其到底该被如何定义?“私人”和“公共”如何区分?“真实”到底是什么意思?
肯尼思·格根认为我们正在与“缺席的在场所造成的挑战”做斗争。尽管在物理空间中,我们身边不乏有血有肉的人,但我们依然为自己在“漂浮世界”中与不在场的对象打交道感到忧心忡忡。雪莉·特克尔在《群体性孤独》一书中指出, 我们也许置身某地,但是心思和情感却在别处。例如,你的同伴虽然与你共进晚餐,但却一直低头用手机与别人聊天。那么他的身体既是在场的,但同时又是缺席的,于是,“自我”的本质就变得模糊起来。 “他”到底在哪里?哈拉维在《赛博格宣言》中宣告,人类和机器的界限已经瓦解。不仅如此,自我和身体的界限也陷入不确定性之中。很快,一些人便认为,他们的“真实自我”在网上能得到最佳呈现。异地恋的双方通过电子设备建立和维系关系,可穿戴设备被嵌入我们日常穿着之中,那么,我们又怎么确认真实的自我究竟在何处栖身呢?此外,如果数字媒体中的自我和面对面交流中的自我不一样,甚至相互矛盾时,我们又该怎么办呢?如果一个人在面对面交流中表现得很有教养,在一个网络论坛中咄咄逼人,在另一个论坛中则渴求关爱,那么,哪一个自我才是真实的呢?是否还存在真实的自我?它曾经存在过吗?
|
考点1:“日新月异的时代”应该译为“an era of rapid change”,准确描述时代的特征
考点2:“何处栖身”推荐译为“where ···to dwell/reside”
|
11af5
|
学术论文
|
社会科学
|
158
|
翻译,英文翻译成中文。不要输出译文以外的内容。以下是你本次的任务:
Abstract
The landscape of mobile gaming has evolved significantly over the years, with profound changes in network reliability and traffic patterns. In the early 2010s, mobile games faced challenges due to unreliable networks and primarily featured asynchronous gameplay. However, in the current era, modern mobile games benefit from robust network connectivity, mirroring PC gaming experiences by relying on persistent connections to game servers. This shift prompted us to conduct an in-depth traffic analysis of two mobile games that represent opposite ends of the genre spectrum: a massively multiplayer game resembling PC MMORPGs with tightly synchronized gameplay, and a single-player puzzle game that incorporates asynchronous social interactions. Surprisingly, both games exhibited remarkably similar traffic footprints: small packets with short inter-packet arrival times, indicating their high expectations for network reliability. This suggests that game developers now prioritize network quality similarly to their PC gaming counterparts. Additionally, our analysis of packet lengths unveiled that recent mobile games predominantly employ short packets dominated by a few key packet types closely tied to player actions, which conforms to observations from PC online games. However, the self-similarity in traffic patterns, a notable feature in PC online games, only partially explains the traffic in mobile games, varying across genres. These findings shed light on the evolving traffic patterns in mobile games and emphasize the need for further research in this dynamic domain.
Keywords: mobile games; traffic analysis; Internet measurement
1. Introduction
Network traffic analysis reports have illustrated that gaming is one of the most popular Internet applications, accounting for an estimated 8–10% of total Internet traffic and ranking as the third biggest traffic source after video streaming and web applications, including social media [1,2,3]. Another report highlights that 92.3% of Internet users access the Internet using mobile phones, with gaming being the most common use for these devices [4]. Given that game servers are typically hosted in cloud data centers and that game service providers incur significant costs for network traffic, understanding mobile game traffic patterns is crucial, not only from an engineering viewpoint but also from a business perspective.
As mobile devices, including smartphones and tablets, have become the prevailing service environment for the gaming industry, game genres that run on mobile devices have also become diverse and complicated. In the early era of networked mobile gaming in the 2010s, most mobile games were asynchronous due to the poor robustness and high cost of mobile networks. In these games, players could interact with other players, such as family members and friends, but they generally played independently rather than tightly together.
On the other hand, today’s mobile network technology, especially in metropolitan areas, has significantly improved in connection stability. A recent survey showed that from October 2022 through March 2023, 5G and 4G networks in the UK exhibited 98.4% and 97.8% connection success rates on average, respectively, when a mobile device becomes active [5]. This suggests that mobile games enjoy much more robust network connectivity, even in the mobile network, and thus such games now rely more on a persistent network connection to the game server. This phenomenal change suggests that the current underlying network traffic patterns might differ from those of the 2010s and from those in the PC gaming environment.
This paper presents a traffic analysis of two globally serviced mobile games. The analyzed games represent opposite extremes in terms of genre. One is a strongly synchronized, massively multiplayer game similar to those in the PC environment, but with an “auto-hunt” feature that allows gameplay without human engagement. The other is less synchronized and primarily played by a single player, although it also features social interactions among players.
The primary contribution of our work is demonstrating that recent mobile games exhibit traffic patterns akin to those in PC games, particularly regarding packet length and inter-packet arrival times, but have differences regarding traffic self-similarity. Our analysis of the games’ traffic traces reveals that current mobile games typically feature notably short inter-packet arrival times, similar to traditional multiplayer games in the PC environment, accompanied by a long-tail distribution due to the intermittent sleep and resume nature of mobile applications. Concerning packet lengths, recent mobile games predominantly use shorter packets, favoring them over aggregated larger packets to conserve network bandwidth. These patterns, observed consistently across different game genres, suggest that modern mobile games are developed with expectations of reliable network connectivity, a notable shift from earlier in the 2010s. This consistency in traffic characteristics, irrespective of game genre, underscores a fundamental similarity in network usage between contemporary mobile and PC gaming platforms. Our analysis, however, discovered that the presence of self-similarity in traffic patterns, identified by previous research on PC online games’ traffic patterns, varies across game genres. This finding encourages further research to model mobile game traffic.
This paper is organized as follows: Section 2 reviews research related to the mobile gaming industry. Section 3 briefly introduces the mobile games analyzed and explains the methods for collecting and anonymizing traffic data. In Section 4, we describe our analysis methodology and present key findings. Finally, Section 5 summarizes these findings and proposes directions for future research.
2. Related Work
Reports have shown that gaming is the most popular activity among mobile device users. Furthermore, gaming activities, which include those on PCs, mobile devices, and consoles, collectively constitute the third largest source of total Internet traffic [1,2,3,4]. Despite its significance in the context of today’s Internet traffic, there is a notable dearth of comprehensive research on mobile game traffic. In contrast, considerable research efforts have been directed towards analyzing PC online game traces, modeling traffic patterns, and optimizing traffic [6,7,8,9,10,11,12,13]. Previous research agrees that PC online games generate highly periodic bursty short packets. In particular, through a comprehensive traffic analysis using substantial packet traces from a PC MMORPG game, Chen et al. discovered that traffic exhibits pronounced periodicity and temporal locality in inter-packet arrival times, attributable to player action patterns [11]. Feng et al. also observed that the distribution of game session time in PC games is not heavy-tailed, a characteristic stemming from the synchronized nature of gameplay [12]. Henderson et al. explored the network quality of service (QoS) tolerance of game players [14]. Whereas previous research has mainly focused on PC online games, our work fills the gap for mobile games. We confirmed that mobile games share similarities with PC online games regarding traffic patterns while having unique differences at the same time. Chen et al. conducted a preliminary traffic analysis using Pokémon Go, which is a mobile AR game [15], but their preliminary results did not uncover relationships with PC online games.
There is considerable research measuring the real-life performance of various mobile network technologies (i.e., 3G, 4G, and 5G) [5,16,17,18,19,20,21,22,23]. Based on these research findings, mobile application developers can reasonably assume that even 4G LTE, currently the most prevalent mobile network worldwide, can provide an average round-trip time (RTT) of about 50 ms between a carrier network and a player’s device in metro areas. In particular, Ref. [22] reports that a study conducted in London showed a 5-millisecond latency could be achieved with 99.999% reliability over a 5G network. This research indicates that applications, including games, running on mobile networks can reliably expect lower latency as mobile infrastructure evolves. These findings on mobile network robustness are the main motivation for our research on mobile game traffic patterns.
Kämäräinen et al. investigated factors contributing to end-to-end latency in cloud gaming, where game scenes are rendered at cloud servers. Their research highlights emerging technologies that could enhance gaming experiences demanding high network bandwidth and strict latency constraints [24]. Similarly, Braud et al. conducted research on mobile AR applications, which require computational offloading to cloud servers and are, therefore, also subject to network bandwidth and latency limitations [25]. Despite its disruptive approach, cloud gaming was not widely deployed in the mobile gaming industry because of high costs and the limited robustness of mobile networks. This approach is, however, being revitalized by the rapid emergence of virtual reality technologies and technological improvements in mobile networks. Therefore, understanding mobile traffic characteristics through our research can also contribute to cloud gaming.
3. Materials and Methods
Due to the exclusivity or closed nature of the gaming industry, game traffic traces are not readily accessible, even in anonymized form. Game companies hesitate to release traffic traces not only because of concerns over players’ privacy information but also due to fears that the data might be used to compromise the security of their services. This partially explains why there has been limited research conducted on the network traffic characteristics of online/mobile games, despite their significance in terms of traffic usage.
Given this limitation, we selected two representative game genres at opposite ends of the spectrum and collected network traffic traces from a mobile game for each genre. The games are globally serviced and independently developed and operated by different game companies. Therefore, the games do not share any specific development methodology or assumptions about underlying network behavior.
|
Abstract
The landscape of mobile gaming has evolved significantly over the years, with profound changes in network reliability and traffic patterns. In the early 2010s, mobile games faced challenges due to unreliable networks and primarily featured asynchronous gameplay. However, in the current era, modern mobile games benefit from robust network connectivity, mirroring PC gaming experiences by relying on persistent connections to game servers. This shift prompted us to conduct an in-depth traffic analysis of two mobile games that represent opposite ends of the genre spectrum: a massively multiplayer game resembling PC MMORPGs with tightly synchronized gameplay, and a single-player puzzle game that incorporates asynchronous social interactions. Surprisingly, both games exhibited remarkably similar traffic footprints: small packets with short inter-packet arrival times, indicating their high expectations for network reliability. This suggests that game developers now prioritize network quality similarly to their PC gaming counterparts. Additionally, our analysis of packet lengths unveiled that recent mobile games predominantly employ short packets dominated by a few key packet types closely tied to player actions, which conforms to observations from PC online games. However, the self-similarity in traffic patterns, a notable feature in PC online games, only partially explains the traffic in mobile games, varying across genres. These findings shed light on the evolving traffic patterns in mobile games and emphasize the need for further research in this dynamic domain.
Keywords: mobile games; traffic analysis; Internet measurement
1. Introduction
Network traffic analysis reports have illustrated that gaming is one of the most popular Internet applications, accounting for an estimated 8–10% of total Internet traffic and ranking as the third biggest traffic source after video streaming and web applications, including social media [1,2,3]. Another report highlights that 92.3% of Internet users access the Internet using mobile phones, with gaming being the most common use for these devices [4]. Given that game servers are typically hosted in cloud data centers and that game service providers incur significant costs for network traffic, understanding mobile game traffic patterns is crucial, not only from an engineering viewpoint but also from a business perspective.
As mobile devices, including smartphones and tablets, have become the prevailing service environment for the gaming industry, game genres that run on mobile devices have also become diverse and complicated. In the early era of networked mobile gaming in the 2010s, most mobile games were asynchronous due to the poor robustness and high cost of mobile networks. In these games, players could interact with other players, such as family members and friends, but they generally played independently rather than tightly together.
On the other hand, today’s mobile network technology, especially in metropolitan areas, has significantly improved in connection stability. A recent survey showed that from October 2022 through March 2023, 5G and 4G networks in the UK exhibited 98.4% and 97.8% connection success rates on average, respectively, when a mobile device becomes active [5]. This suggests that mobile games enjoy much more robust network connectivity, even in the mobile network, and thus such games now rely more on a persistent network connection to the game server. This phenomenal change suggests that the current underlying network traffic patterns might differ from those of the 2010s and from those in the PC gaming environment.
This paper presents a traffic analysis of two globally serviced mobile games. The analyzed games represent opposite extremes in terms of genre. One is a strongly synchronized, massively multiplayer game similar to those in the PC environment, but with an “auto-hunt” feature that allows gameplay without human engagement. The other is less synchronized and primarily played by a single player, although it also features social interactions among players.
The primary contribution of our work is demonstrating that recent mobile games exhibit traffic patterns akin to those in PC games, particularly regarding packet length and inter-packet arrival times, but have differences regarding traffic self-similarity. Our analysis of the games’ traffic traces reveals that current mobile games typically feature notably short inter-packet arrival times, similar to traditional multiplayer games in the PC environment, accompanied by a long-tail distribution due to the intermittent sleep and resume nature of mobile applications. Concerning packet lengths, recent mobile games predominantly use shorter packets, favoring them over aggregated larger packets to conserve network bandwidth. These patterns, observed consistently across different game genres, suggest that modern mobile games are developed with expectations of reliable network connectivity, a notable shift from earlier in the 2010s. This consistency in traffic characteristics, irrespective of game genre, underscores a fundamental similarity in network usage between contemporary mobile and PC gaming platforms. Our analysis, however, discovered that the presence of self-similarity in traffic patterns, identified by previous research on PC online games’ traffic patterns, varies across game genres. This finding encourages further research to model mobile game traffic.
This paper is organized as follows: Section 2 reviews research related to the mobile gaming industry. Section 3 briefly introduces the mobile games analyzed and explains the methods for collecting and anonymizing traffic data. In Section 4, we describe our analysis methodology and present key findings. Finally, Section 5 summarizes these findings and proposes directions for future research.
2. Related Work
Reports have shown that gaming is the most popular activity among mobile device users. Furthermore, gaming activities, which include those on PCs, mobile devices, and consoles, collectively constitute the third largest source of total Internet traffic [1,2,3,4]. Despite its significance in the context of today’s Internet traffic, there is a notable dearth of comprehensive research on mobile game traffic. In contrast, considerable research efforts have been directed towards analyzing PC online game traces, modeling traffic patterns, and optimizing traffic [6,7,8,9,10,11,12,13]. Previous research agrees that PC online games generate highly periodic bursty short packets. In particular, through a comprehensive traffic analysis using substantial packet traces from a PC MMORPG game, Chen et al. discovered that traffic exhibits pronounced periodicity and temporal locality in inter-packet arrival times, attributable to player action patterns [11]. Feng et al. also observed that the distribution of game session time in PC games is not heavy-tailed, a characteristic stemming from the synchronized nature of gameplay [12]. Henderson et al. explored the network quality of service (QoS) tolerance of game players [14]. Whereas previous research has mainly focused on PC online games, our work fills the gap for mobile games. We confirmed that mobile games share similarities with PC online games regarding traffic patterns while having unique differences at the same time. Chen et al. conducted a preliminary traffic analysis using Pokémon Go, which is a mobile AR game [15], but their preliminary results did not uncover relationships with PC online games.
There is considerable research measuring the real-life performance of various mobile network technologies (i.e., 3G, 4G, and 5G) [5,16,17,18,19,20,21,22,23]. Based on these research findings, mobile application developers can reasonably assume that even 4G LTE, currently the most prevalent mobile network worldwide, can provide an average round-trip time (RTT) of about 50 ms between a carrier network and a player’s device in metro areas. In particular, Ref. [22] reports that a study conducted in London showed a 5-millisecond latency could be achieved with 99.999% reliability over a 5G network. This research indicates that applications, including games, running on mobile networks can reliably expect lower latency as mobile infrastructure evolves. These findings on mobile network robustness are the main motivation for our research on mobile game traffic patterns.
Kämäräinen et al. investigated factors contributing to end-to-end latency in cloud gaming, where game scenes are rendered at cloud servers. Their research highlights emerging technologies that could enhance gaming experiences demanding high network bandwidth and strict latency constraints [24]. Similarly, Braud et al. conducted research on mobile AR applications, which require computational offloading to cloud servers and are, therefore, also subject to network bandwidth and latency limitations [25]. Despite its disruptive approach, cloud gaming was not widely deployed in the mobile gaming industry because of high costs and the limited robustness of mobile networks. This approach is, however, being revitalized by the rapid emergence of virtual reality technologies and technological improvements in mobile networks. Therefore, understanding mobile traffic characteristics through our research can also contribute to cloud gaming.
3. Materials and Methods
Due to the exclusivity or closed nature of the gaming industry, game traffic traces are not readily accessible, even in anonymized form. Game companies hesitate to release traffic traces not only because of concerns over players’ privacy information but also due to fears that the data might be used to compromise the security of their services. This partially explains why there has been limited research conducted on the network traffic characteristics of online/mobile games, despite their significance in terms of traffic usage.
Given this limitation, we selected two representative game genres at opposite ends of the spectrum and collected network traffic traces from a mobile game for each genre. The games are globally serviced and independently developed and operated by different game companies. Therefore, the games do not share any specific development methodology or assumptions about underlying network behavior.
|
考点 1:“traffic patterns” 应译为 “流量模式 / 流量特征”
考点 2:“asynchronous gameplay” 应译为 “异步游戏玩法”
考点 3:“persistent connections” 应译为 “持久连接”
考点 4:“asynchronous social interactions” 应译为 “异步社交互动”
考点 6:“traffic footprints” 应译为 “流量足迹”
考点 7:“inter-packet arrival times” 应译为 “分组间到达时间”
考点 8:“packet lengths” 应译为 “数据包长度”
考点 9:“mobile devices” 应译为 “移动设备”
考点 10:“persistent network connection” 应译为 “持续网络连接”
考点 11:“traffic traces” 应译为 “流量轨迹 / 流量跟踪数据”
考点 12:“long-tail distribution” 应译为 “长尾分布”
考点 13:“game session time” 应译为 “游戏会话时长”
考点 14:“Quality of Service (QoS)” 应译为 “服务质量”
考点 15:“packet traces” 应译为 “数据包跟踪数据”
考点 16:“Pokémon Go” 应译为 “精灵宝可梦 Go”
考点 17:“round-trip time (RTT)” 应译为 “往返时延”
考点 18:“latency” 应译为 “时延 / 延迟”
考点 19:“cloud gaming” 应译为 “云游戏”
考点 20:“end-to-end latency” 应译为 “端到端时延”
考点 21:“computational offloading” 应译为 “计算卸载”
考点 22:“network traffic characteristics” 应译为 “网络流量特征”
|
11dca
|
学术论文
|
应用学科
|
81
|
翻译,中文翻译成英文。不要输出译文以外的内容。
以下是你本次的任务:
药品专利链接制度中的利益平衡与纠纷解决机制
一、专利链接的核心机制与运行冲突
药品专利链接制度(即 “药品上市审批与专利纠纷早期解决挂钩”)是平衡创新药企业与仿制药企业利益的关键制度。根据《药品专利纠纷早期解决机制实施办法》,仿制药申请人在提交上市申请时,需对参比制剂(即原研药)的专利状态作出声明,分为四类:
1. 专利无效声明(声明原研药专利全部无效)
2. 不侵权声明(声明仿制药技术方案未落入原研药专利保护范围)
3. 等待声明(声明在原研药专利到期后再上市)
4. 专利挑战声明(声明原研药专利应当被宣告无效或仿制药不侵权,并启动纠纷解决程序)
实践中,该机制的运行冲突集中在三方面:一是 “专利常青” 问题,原研药企业通过 “晶型专利 + 适应症专利” 的组合延长保护期(如某抗癌药核心专利到期后,企业通过新晶型专利再获 8 年保护),仿制药企业认为此举超出合理保护范围;二是 “首仿药市场独占期” 争夺,根据规定,首个成功挑战专利并获批的仿制药可获 12 个月市场独占期,但 2023 年某抗生素仿制药案中,两家企业同时挑战成功,引发 “谁应享有独占期” 的争议;三是 “审批周期与诉讼时效的衔接”,仿制药上市审批平均需 10 个月,而专利诉讼一审周期约 6 个月,可能出现 “药已上市但专利纠纷未决” 的情况,导致侵权风险。
二、利益平衡的实践困境
制度设计的初衷是 “激励创新与促进可及性”,但实践中面临三重矛盾:
1. 专利保护强度与药品可及性的冲突
创新药企业主张 “严格专利保护是研发动力”(某单抗药物研发成本超 10 亿美元,专利期内需收回成本),而患者组织则认为 “过长保护期推高药价”(某乙肝新药年治疗费 2.8 万元,专利到期后仿制药价格降至 3000 元)。2024 年某罕见病药案中,原研药专利保护期还有 5 年,但国内患者年死亡率达 30%,仿制药企业申请 “专利强制许可” 被驳回,引发 “生命权与专利权孰先” 的讨论。
2. 数据独占与试验数据依赖的矛盾
原研药企业享有 6 年临床试验数据独占期,仿制药企业需自行开展试验或寻求授权,但复杂制剂(如缓释微球)的试验数据难以重复,导致仿制药研发成本增加。某降糖药仿制药企业因无法获取原研药的药代动力学数据,试验周期延长 2 年,上市时间滞后于国际市场,被质疑 “数据独占变相延长专利保护”。
3. 跨境药品贸易中的平行进口争议
专利链接制度仅适用于境内上市药品,境外已上市但未在我国注册的原研药通过平行进口进入国内时,其专利状态不受该机制约束。2023 年某抗艾滋病药平行进口案中,境外低价药因未在我国登记专利,仿制药企业无法发起专利挑战,导致 “同药不同价”(进口价为国内仿制药的 1/3),冲击国内市场秩序。
三、纠纷解决路径的选择与局限
现行机制提供三类纠纷解决途径,但各有不足:
1. 行政裁决的效率优势与效力局限
国家药监局下设的专利纠纷早期解决机制办公室可在 90 日内作出行政裁决,效率高于诉讼,但裁决仅对当事人具有约束力,且不能直接宣告专利无效(需另行向专利局提出无效宣告请求)。2023 年某降压药案中,行政裁决认定 “仿制药不侵权”,但原研药企业不服并提起民事诉讼,导致同一纠纷重复处理。
2. 民事诉讼的终局性与周期问题
法院审理可直接对专利有效性作出认定(根据《专利法》第 76 条),但一审平均周期 8 个月,远超仿制药审批周期。某抗病毒药仿制药在诉讼期间获批上市,后法院认定侵权,企业被迫召回已销售药品,损失超 5000 万元。
3. 调解机制的自愿性与执行短板
行业协会组织的调解可快速达成和解(如 2024 年某抗生素案调解耗时 45 天),但和解协议需依赖双方自觉履行,缺乏强制执行力。某仿制药企业与原研药企业达成 “专利许可费 3%” 的调解协议后,仿制药上市后拒绝支付,原研药企业仍需通过诉讼维权。
四、制度完善的实践探索
针对上述问题,实践中已出现三类改进方向:
1. 专利信息登记的精细化
药监局建立 “药品专利信息登记平台”,要求原研药企业按 “活性成分、晶型、制剂、适应症” 分类登记专利,避免 “模糊登记”。某生物制剂企业因将 “通用技术特征” 登记为专利,被驳回登记请求,减少了 “专利流氓” 的投机空间。
2. 首仿药独占期的细化规则
明确 “同时挑战成功时,按首仿药申请日期排序”,2024 年某抗肿瘤仿制药案中,两家企业同日挑战成功,按 “技术先进性”(如生物利用度更高)判定其中一家享有独占期,减少了争议。
3. 跨境协作机制的试点
与东南亚国家建立 “药品专利数据互认”,某仿制药企业在泰国获批的生物等效性数据,在国内申报时被认可,缩短研发周期 6 个月,同时通过 “专利审查高速路”(PPH)加快跨境专利纠纷解决,2024 年某抗生素案通过 PPH 机制使中美专利审查周期同步,避免了重复诉讼。
|
药品专利链接制度中的利益平衡与纠纷解决机制
一、专利链接的核心机制与运行冲突
药品专利链接制度(即 “药品上市审批与专利纠纷早期解决挂钩”)是平衡创新药企业与仿制药企业利益的关键制度。根据《药品专利纠纷早期解决机制实施办法》,仿制药申请人在提交上市申请时,需对参比制剂(即原研药)的专利状态作出声明,分为四类:
1. 专利无效声明(声明原研药专利全部无效)
2. 不侵权声明(声明仿制药技术方案未落入原研药专利保护范围)
3. 等待声明(声明在原研药专利到期后再上市)
4. 专利挑战声明(声明原研药专利应当被宣告无效或仿制药不侵权,并启动纠纷解决程序)
实践中,该机制的运行冲突集中在三方面:一是 “专利常青” 问题,原研药企业通过 “晶型专利 + 适应症专利” 的组合延长保护期(如某抗癌药核心专利到期后,企业通过新晶型专利再获 8 年保护),仿制药企业认为此举超出合理保护范围;二是 “首仿药市场独占期” 争夺,根据规定,首个成功挑战专利并获批的仿制药可获 12 个月市场独占期,但 2023 年某抗生素仿制药案中,两家企业同时挑战成功,引发 “谁应享有独占期” 的争议;三是 “审批周期与诉讼时效的衔接”,仿制药上市审批平均需 10 个月,而专利诉讼一审周期约 6 个月,可能出现 “药已上市但专利纠纷未决” 的情况,导致侵权风险。
二、利益平衡的实践困境
制度设计的初衷是 “激励创新与促进可及性”,但实践中面临三重矛盾:
1. 专利保护强度与药品可及性的冲突
创新药企业主张 “严格专利保护是研发动力”(某单抗药物研发成本超 10 亿美元,专利期内需收回成本),而患者组织则认为 “过长保护期推高药价”(某乙肝新药年治疗费 2.8 万元,专利到期后仿制药价格降至 3000 元)。2024 年某罕见病药案中,原研药专利保护期还有 5 年,但国内患者年死亡率达 30%,仿制药企业申请 “专利强制许可” 被驳回,引发 “生命权与专利权孰先” 的讨论。
2. 数据独占与试验数据依赖的矛盾
原研药企业享有 6 年临床试验数据独占期,仿制药企业需自行开展试验或寻求授权,但复杂制剂(如缓释微球)的试验数据难以重复,导致仿制药研发成本增加。某降糖药仿制药企业因无法获取原研药的药代动力学数据,试验周期延长 2 年,上市时间滞后于国际市场,被质疑 “数据独占变相延长专利保护”。
3. 跨境药品贸易中的平行进口争议
专利链接制度仅适用于境内上市药品,境外已上市但未在我国注册的原研药通过平行进口进入国内时,其专利状态不受该机制约束。2023 年某抗艾滋病药平行进口案中,境外低价药因未在我国登记专利,仿制药企业无法发起专利挑战,导致 “同药不同价”(进口价为国内仿制药的 1/3),冲击国内市场秩序。
三、纠纷解决路径的选择与局限
现行机制提供三类纠纷解决途径,但各有不足:
1. 行政裁决的效率优势与效力局限
国家药监局下设的专利纠纷早期解决机制办公室可在 90 日内作出行政裁决,效率高于诉讼,但裁决仅对当事人具有约束力,且不能直接宣告专利无效(需另行向专利局提出无效宣告请求)。2023 年某降压药案中,行政裁决认定 “仿制药不侵权”,但原研药企业不服并提起民事诉讼,导致同一纠纷重复处理。
2. 民事诉讼的终局性与周期问题
法院审理可直接对专利有效性作出认定(根据《专利法》第 76 条),但一审平均周期 8 个月,远超仿制药审批周期。某抗病毒药仿制药在诉讼期间获批上市,后法院认定侵权,企业被迫召回已销售药品,损失超 5000 万元。
3. 调解机制的自愿性与执行短板
行业协会组织的调解可快速达成和解(如 2024 年某抗生素案调解耗时 45 天),但和解协议需依赖双方自觉履行,缺乏强制执行力。某仿制药企业与原研药企业达成 “专利许可费 3%” 的调解协议后,仿制药上市后拒绝支付,原研药企业仍需通过诉讼维权。
四、制度完善的实践探索
针对上述问题,实践中已出现三类改进方向:
1. 专利信息登记的精细化
药监局建立 “药品专利信息登记平台”,要求原研药企业按 “活性成分、晶型、制剂、适应症” 分类登记专利,避免 “模糊登记”。某生物制剂企业因将 “通用技术特征” 登记为专利,被驳回登记请求,减少了 “专利流氓” 的投机空间。
2. 首仿药独占期的细化规则
明确 “同时挑战成功时,按首仿药申请日期排序”,2024 年某抗肿瘤仿制药案中,两家企业同日挑战成功,按 “技术先进性”(如生物利用度更高)判定其中一家享有独占期,减少了争议。
3. 跨境协作机制的试点
与东南亚国家建立 “药品专利数据互认”,某仿制药企业在泰国获批的生物等效性数据,在国内申报时被认可,缩短研发周期 6 个月,同时通过 “专利审查高速路”(PPH)加快跨境专利纠纷解决,2024 年某抗生素案通过 PPH 机制使中美专利审查周期同步,避免了重复诉讼。
|
考点1:“参比制剂” 推荐译为 “reference listed drug (RLD)”
考点2:“首仿药市场独占期” 推荐译为 “first generic drug market exclusivity period”
考点3:“平行进口” 推荐译为 “parallel importation”
考点4:“晶型专利” 推荐译为 “crystal form patent”
考点5:“参比制剂” 必须译为 “reference listed drug (RLD)”,不能译为 “reference drug”
|
1271a
|
学术论文
|
社会科学
|
148
|
翻译,英文翻译成中文。不要输出译文以外的内容。
以下是你本次的任务:
For every spike in voltage there was a small but predictable increase in pleasure
With so much variety, it is telling when something remains constant. Try an experiment: lick your fingers as though you were about to turn a page. Instinctively, you’ve licked the spot where fingers grip light objects, and at its centre are the concentric ridges and grooves that define your fingerprint. If you move your finger over an object in most directions, the object will run roughly perpendicular to these ridges, allowing friction to tug on each ridge as though toppling a wall. This central, bulbous part of your fingertip also contains the finest, densest set of ridges. You can see this if you follow your finger a short distance toward your palm, where the ridges become progressively wider. It is no coincidence that the ridges are finest, most centred on the part of your finger that first makes contact with an object. It is also where the nerve endings that sense touch are most dense. If you’re the caressing sort, recall how you have touched a lover, your fingertips scanning as they glide slowly over skin. Perhaps your palm lay flat, presenting the largest possible surface for contact.
The ridges of our fingers and hands are densely innervated by sensory neurons, nerve cells that translate pressure into changes in voltage. These sensory neurons come in a variety of forms suited for their tasks, named after neuroscientists like Merkel, Ruffini, Meissner and Pacini. Nerve endings can be capped with structures called disks, capsules or corpuscles – each defined by a distinctive weight or stiffness. These tips make the neurons more or less sensitive to pressure. The nerve endings that sense touch can be buried deep in the skin or can be so near the surface you could find them within the ridge of a fingerprint.
When the pressure and depth of touch are just right, the surface of the sensing neuron is deformed, stretched until the tension opens channels that let electrically charged salt ions flow in and out of the cell. The voltage change caused by the flow of ions zips along a cable-like projection to the spinal cord, where it gets passed on to other nerve cells and eventually to the brain. We can judge how smooth or pliant something is because voltages conveying the complex patterns of pressure arrive quickly enough for our brains to perceive subtle variation in timing. Without this ability, touch would feel like a surveillance tape played at half-speed: blurred and coarse. Like other species, we gain this speed by insulating our cables. Nerve cells are highly specialised, and require companion cells to help them with the daily details of cellular living. Some of these companions have developed means of enveloping the cable-like projections of neurons, becoming flat and wrapping themselves around the exterior of the cable again and again, like a king-sized sheet swaddling an infant. Or like rubber coating wire.
Insulated neurons are responsible for fine touch, but there is a second class of receptors that remain bare. These bare nerve endings are slower, and respond to coarser kinds of stimuli. Science has long known that these unmyelinated neurons respond to temperature, pain, tickle and itch. But we have only recently learned that they also respond to the pleasurable sensation of caress. Researchers in Sweden recorded data from neurons in the skin of human subjects as they exposed them to soft slow touch. For every spike in voltage there was a small but predictable increase in pleasure. While these naked neurons are missing in the hairless skin of our fingers and palms, they are found on the rest of the body, on the places you might touch with affection or consolation. And naked fibres are particularly abundant in the places we like to juxtapose – our lips, nipples, genitals and anus. The clitoris and the glans are enmeshed in the unmyelinated ends of sensory neurons. Inexplicably, we have often assumed these naked fibres were there for the sensation of pain, as though we had never known the joy of sexual touch.
In the naked stream, touch can be warm or rapturous or full of hurt
Each touch receptor propagates voltages upward toward the spinal cord and brain, voltages that float like bottles bearing notes along a waterway defined by the spindly extensions of sensory neurons. Each current conveys its own kind of message, and the myriad currents coalesce into two north-bound streams.
Of these streams, the routes of discriminative touch are particularly well mapped. In the 1930s, Canadian neurosurgeon Wilder Penfield electrically stimulated the brains of epileptics, probing the cortex for the origin of seizures. Patients had to be awake for this procedure so that he could ask them what experiences were evoked by the faint electrical current. Electricity alone was enough to elicit the feeling of being touched on an arm, or, when delivered to a nearby region of cortex, the shoulder.
Penfield found that the brain contained precise maps of the body; he charted duplicate maps of both touch and movement, side by side, along adjacent folds of the cortex. The resulting ‘homunculus’ is an iconic image in neuroscience – a strange representation of the body whose distortions, like early maps of the world, reflect how we value the body’s surface. Those areas where touch is most sensitive are inflated. And three-dimensional reconstructions of these maps reveal a grotesque caricature of our evolutionary past. Our fingers, faces, palms, lips, tongues and genitals are all out-sized. The map of our brain’s control of movement is similarly distorted – our hands and mouths in particular are both exquisitely sensitive and extraordinarily precise. Play the piano or fellate a pianist and you will invoke our specialisations of sensation and motion to equal degrees.
Perhaps the most remarkable attribute of discriminative touch is that it reveals just how malleable our brains can be. The brains of patients born with syndactyly, in which two or more fingers are fused, represent that set of fingers as a single unit. Free the fingers and their cortical maps soon follow, new borders arising from their independence. Professional string musicians use the left hand for the precise fingering of an arpeggio or aria. With each note played glissando or staccato, with each shimmering or soulful vibrato, the left-handed cortices slowly swell.
If use inflates neural representations, disuse causes them to shrink, allowing neighbouring neurons to squat on the vacant real estate. Neurons that register facial touch lie adjacent to representations of our arms; amputees who lose an arm find that the brain’s face grows to take over the now idle regions of the map. Genital touch and the control of pelvic muscles lie side by side along a central nook of cortex, just below the cortical territories of feet. In one of the more provocative examples of neural plasticity, the neuroscientist V S Ramachandran at the University of California, San Diego, cites two amputees who, after losing a foot, seem to have gained genital sensitivity. One patient reported that his orgasm spanned from his genitals to his phantom foot.
A student of Ramachandran has gone on to suggest that such brain reorganisation contributed to the millennial prevalence of footbinding in medieval China. The brutal process, illegal since 1912, involved the bending and binding of a young girl’s foot, accomplished over years, until it was folded over like a billfold or, more generously, a lotus blossom. While the hobbling of women must have been a primary motive, Paul McGeoch, a clinician in San Diego, suggests that these women would also have experienced the atrophy of foot cortices and the encroachment of genital maps. English language scholarship from the 1960s cites texts that extol the virtues of footbinding. Some claim that it promoted vaginal tone, or that the foot became unusually sensitive to erotic touch. This literature seems somehow complicit with the practice and its misogyny – and yet it is consistent with our understanding of cortical plasticity.
The shifting landscape of discriminative touch reveals just how deeply we are shaped by our experiences. Our brains are sculpted by the accretion and erosion of their innumerable connections; the dendrites and spines of our neurons are altered by the information that flows through them. A friend and professional musician has worked his way across Europe, transcribing rare sheets of music written specifically for the viola, and sleeping in bathhouses along the way. At home, he keeps a map with pins in each country whose citizens he has sampled sexually. There are many pins. I imagine what his cortices must look like. Does he touch new skin with his left fingers? Do his lips tremble as he plays a passionate concerto? The ways in which we are changed by our paths through the world suggest an exquisite variety and specificity of experience.
|
For every spike in voltage there was a small but predictable increase in pleasure
With so much variety, it is telling when something remains constant. Try an experiment: lick your fingers as though you were about to turn a page. Instinctively, you’ve licked the spot where fingers grip light objects, and at its centre are the concentric ridges and grooves that define your fingerprint. If you move your finger over an object in most directions, the object will run roughly perpendicular to these ridges, allowing friction to tug on each ridge as though toppling a wall. This central, bulbous part of your fingertip also contains the finest, densest set of ridges. You can see this if you follow your finger a short distance toward your palm, where the ridges become progressively wider. It is no coincidence that the ridges are finest, most centred on the part of your finger that first makes contact with an object. It is also where the nerve endings that sense touch are most dense. If you’re the caressing sort, recall how you have touched a lover, your fingertips scanning as they glide slowly over skin. Perhaps your palm lay flat, presenting the largest possible surface for contact.
The ridges of our fingers and hands are densely innervated by sensory neurons, nerve cells that translate pressure into changes in voltage. These sensory neurons come in a variety of forms suited for their tasks, named after neuroscientists like Merkel, Ruffini, Meissner and Pacini. Nerve endings can be capped with structures called disks, capsules or corpuscles – each defined by a distinctive weight or stiffness. These tips make the neurons more or less sensitive to pressure. The nerve endings that sense touch can be buried deep in the skin or can be so near the surface you could find them within the ridge of a fingerprint.
When the pressure and depth of touch are just right, the surface of the sensing neuron is deformed, stretched until the tension opens channels that let electrically charged salt ions flow in and out of the cell. The voltage change caused by the flow of ions zips along a cable-like projection to the spinal cord, where it gets passed on to other nerve cells and eventually to the brain. We can judge how smooth or pliant something is because voltages conveying the complex patterns of pressure arrive quickly enough for our brains to perceive subtle variation in timing. Without this ability, touch would feel like a surveillance tape played at half-speed: blurred and coarse. Like other species, we gain this speed by insulating our cables. Nerve cells are highly specialised, and require companion cells to help them with the daily details of cellular living. Some of these companions have developed means of enveloping the cable-like projections of neurons, becoming flat and wrapping themselves around the exterior of the cable again and again, like a king-sized sheet swaddling an infant. Or like rubber coating wire.
Insulated neurons are responsible for fine touch, but there is a second class of receptors that remain bare. These bare nerve endings are slower, and respond to coarser kinds of stimuli. Science has long known that these unmyelinated neurons respond to temperature, pain, tickle and itch. But we have only recently learned that they also respond to the pleasurable sensation of caress. Researchers in Sweden recorded data from neurons in the skin of human subjects as they exposed them to soft slow touch. For every spike in voltage there was a small but predictable increase in pleasure. While these naked neurons are missing in the hairless skin of our fingers and palms, they are found on the rest of the body, on the places you might touch with affection or consolation. And naked fibres are particularly abundant in the places we like to juxtapose – our lips, nipples, genitals and anus. The clitoris and the glans are enmeshed in the unmyelinated ends of sensory neurons. Inexplicably, we have often assumed these naked fibres were there for the sensation of pain, as though we had never known the joy of sexual touch.
In the naked stream, touch can be warm or rapturous or full of hurt
Each touch receptor propagates voltages upward toward the spinal cord and brain, voltages that float like bottles bearing notes along a waterway defined by the spindly extensions of sensory neurons. Each current conveys its own kind of message, and the myriad currents coalesce into two north-bound streams.
Of these streams, the routes of discriminative touch are particularly well mapped. In the 1930s, Canadian neurosurgeon Wilder Penfield electrically stimulated the brains of epileptics, probing the cortex for the origin of seizures. Patients had to be awake for this procedure so that he could ask them what experiences were evoked by the faint electrical current. Electricity alone was enough to elicit the feeling of being touched on an arm, or, when delivered to a nearby region of cortex, the shoulder.
Penfield found that the brain contained precise maps of the body; he charted duplicate maps of both touch and movement, side by side, along adjacent folds of the cortex. The resulting ‘homunculus’ is an iconic image in neuroscience – a strange representation of the body whose distortions, like early maps of the world, reflect how we value the body’s surface. Those areas where touch is most sensitive are inflated. And three-dimensional reconstructions of these maps reveal a grotesque caricature of our evolutionary past. Our fingers, faces, palms, lips, tongues and genitals are all out-sized. The map of our brain’s control of movement is similarly distorted – our hands and mouths in particular are both exquisitely sensitive and extraordinarily precise. Play the piano or fellate a pianist and you will invoke our specialisations of sensation and motion to equal degrees.
Perhaps the most remarkable attribute of discriminative touch is that it reveals just how malleable our brains can be. The brains of patients born with syndactyly, in which two or more fingers are fused, represent that set of fingers as a single unit. Free the fingers and their cortical maps soon follow, new borders arising from their independence. Professional string musicians use the left hand for the precise fingering of an arpeggio or aria. With each note played glissando or staccato, with each shimmering or soulful vibrato, the left-handed cortices slowly swell.
If use inflates neural representations, disuse causes them to shrink, allowing neighbouring neurons to squat on the vacant real estate. Neurons that register facial touch lie adjacent to representations of our arms; amputees who lose an arm find that the brain’s face grows to take over the now idle regions of the map. Genital touch and the control of pelvic muscles lie side by side along a central nook of cortex, just below the cortical territories of feet. In one of the more provocative examples of neural plasticity, the neuroscientist V S Ramachandran at the University of California, San Diego, cites two amputees who, after losing a foot, seem to have gained genital sensitivity. One patient reported that his orgasm spanned from his genitals to his phantom foot.
A student of Ramachandran has gone on to suggest that such brain reorganisation contributed to the millennial prevalence of footbinding in medieval China. The brutal process, illegal since 1912, involved the bending and binding of a young girl’s foot, accomplished over years, until it was folded over like a billfold or, more generously, a lotus blossom. While the hobbling of women must have been a primary motive, Paul McGeoch, a clinician in San Diego, suggests that these women would also have experienced the atrophy of foot cortices and the encroachment of genital maps. English language scholarship from the 1960s cites texts that extol the virtues of footbinding. Some claim that it promoted vaginal tone, or that the foot became unusually sensitive to erotic touch. This literature seems somehow complicit with the practice and its misogyny – and yet it is consistent with our understanding of cortical plasticity.
The shifting landscape of discriminative touch reveals just how deeply we are shaped by our experiences. Our brains are sculpted by the accretion and erosion of their innumerable connections; the dendrites and spines of our neurons are altered by the information that flows through them. A friend and professional musician has worked his way across Europe, transcribing rare sheets of music written specifically for the viola, and sleeping in bathhouses along the way. At home, he keeps a map with pins in each country whose citizens he has sampled sexually. There are many pins. I imagine what his cortices must look like. Does he touch new skin with his left fingers? Do his lips tremble as he plays a passionate concerto? The ways in which we are changed by our paths through the world suggest an exquisite variety and specificity of experience.
|
考点1:“the dendrites and spines of our neurons” 推荐译为 “神经元的树突和棘突”
考点2:“the places we like to juxtapose” 中的 “juxtapose” 不可译为 “并置”,推荐译为 “让(身体部位)并列、贴近、接触”
考点3:“left-handed cortices” 不可译为 “左手皮层”,这在神经科学上是不准确的,可能会误导专业读者。应译为 “(大脑中)负责左手的皮层区域”
考点4:“provocative” 推荐译为 “引人深思的、启发性的”,不可译为 “令人激动的”
考点5:“more generously” 推荐译为 “说得好听一点” 或 “用一种更美化的方式说”,不可译为 “更宽泛地说”
考点6:“bathhouses” 推荐译为 “澡堂”,不可译为 “浴室”
|
14732
|
垂类场景
|
食品健康
|
191
|
翻译,英文翻译成中文。不要输出译文以外的内容。
以下是你本次的任务:
Spinal cord injury (SCI) is a complex and dynamic pathological condition characterized by disrupted lipid metabolism and neuroinflammatory responses, posing significant therapeutic challenges. To address these, biomimetic bacterial outer membrane nanoparticles (BM-NPs) are designed by integrating the precise targeting capability of detoxified outer membrane vesicles (dOMVs) with the efficient drug-loading properties of liposomes. BM-NPs exhibit superior targeting efficiency toward peripheral neutrophils and macrophages, enabling spatiotemporal drug delivery via immune cells.
An innovative “Tortoise and Hare” dynamic adaptive delivery strategy is introduced, where neutrophils facilitate rapid drug transport during the acute phase of SCI, while macrophages ensure sustained delivery during the subacute phase. This strategy aligns with the dynamic pathological progression of SCI, offering precision targeting tailored to different stages of injury. BM-NPs demonstrate multifaceted therapeutic effects, including the suppression of foam cell formation through coordinated enhancement of lipid droplet autophagy and cholesterol efflux. Furthermore, they modulate the inflammatory microenvironment, preserve myelin integrity, and significantly promote neural functional recovery post-SCI. By overcoming the limitations of conventional delivery systems in targeting and timeliness, BM-NPs offer an innovative, highly efficient, and clinically translatable platform for SCI treatment and other acute inflammatory disorders of the central nervous system.
1. Introduction
The pathological features of spinal cord injury (SCI) encompass inflammatory responses, foam cell formation, myelin degradation, and neurological dysfunction. Among these, resident microglia and peripheral macrophages in the spinal cord play dual roles: they participate in the clearance of myelin debris while serving as pivotal mediators in the inflammatory response. However, their phagocytic activity can inadvertently lead to foam cell formation, exacerbating lipid metabolism dysregulation and inflammatory cascades, thereby posing a major barrier to effective treatment. Moreover, the presence of the blood-spinal cord barrier (BSCB) significantly hampers drug delivery efficiency, further limiting the therapeutic outcomes. Although nanotechnology-based delivery systems have improved drug targeting and therapeutic efficacy to some extent, they lack active cellular functionality, which limits their timeliness and delivery efficiency in the complex pathological environment of SCI. Conversely, cell-based drug delivery systems present a promising alternative. Leveraging the innate chemotactic abilities of intact cells, these systems enable precise and dynamic drug delivery.[7] In our previous study, we employed engineered macrophages for drug delivery in the treatment of SCI and revealed significantly improved functional recovery in a rat SCI model. However, we also observed that blood-derived macrophages only began to appear at the injury site at ≈3 days post-SCI. This indicates the existence of a “treatment void” period before macrophages reach the injury site, where no effective therapeutic intervention occurs. Therefore, it is critical to develop more timely and efficient therapeutic strategies to address this unmet need.
To address the aforementioned challenges, in this study, we developed biomimetic bacterial outer membrane nanoparticles (BM-NPs) derived from detoxified outer membrane vesicles (dOMVs) of msbB-deficient Escherichia coli. By knocking out the msbB gene, the toxicity of lipopolysaccharide (LPS) in dOMVs was significantly reduced while retaining the functional outer membrane protein A (OmpA), enabling efficient immune cell targeting. Through fusion with drug-loaded liposomes, BM-NPs not only exhibited excellent targeting specificity and drug-loading capacity but also achieved dual-drug synergistic regulation: rapamycin (Rapa) induced autophagy to promote lipid droplet degradation, while LXR-623 enhanced the free cholesterol efflux. This synergistic effect effectively inhibited foam cell formation and improved the inflammatory microenvironment. Instead of simply decorating liposomes with isolated OmpA proteins, we adopted a fusion strategy with dOMVs to construct BM-NPs. This approach is expected to better preserve the native conformation and biological functionality of outer membrane proteins within a physiological lipid environment. In contrast, direct anchoring of membrane proteins onto synthetic liposomes may increase the risk of conformational instability, functional loss, and batch-to-batch variability. Therefore, the biomimetic fusion-based design may offer improved nanoparticle stability, immune cell targeting, and therapeutic potential for SCI treatment.
BM-NPs exhibited precise spatiotemporal targeting capabilities through a novel “Tortoise and Hare” collaborative delivery strategy. This approach emphasizes the complementary roles of neutrophils and macrophages in stage-specific drug delivery during SCI treatment. In the acute phase, neutrophils rapidly recognize and internalize BM-NPs via their innate chemotactic properties, acting as “Trojan horse” carriers preferentially recruited to the injury site. These neutrophils, akin to the swift hare, promptly release neutrophil extracellular traps (NETs) upon stimulation from the injury microenvironment, triggering the early release of drug-loaded NPs to regulate microglial function.
Approximately 3 days post-injury, monocyte-derived macrophages carrying BM-NPs gradually infiltrate the injury site, functioning as the enduring tortoise. These macrophages provide sustained drug delivery, thus extending the therapeutic effects. This collaborative delivery strategy enables BM-NPs to achieve stage-specific regulation across different pathological phases of SCI, effectively inhibiting foam cell formation, ameliorating the inflammatory microenvironment, and ultimately promoting significant neurological recovery.
In this study, we propose an immune cell-mediated dynamic adaptive delivery strategy that precisely aligns with the pathological stages of SCI, offering a novel and promising solution for SCI treatment.
2. Results and Discussion
2.1. Exacerbation of Foam Cell Formation Following SCI
During SCI, acute mechanical compression and subsequent inflammatory stimuli lead to significant demyelination. The resident microglia and blood-derived macrophages are the primary phagocytes responsible for clearing myelin debris in the spinal cord. Due to similarities in morphology, gene expression, and surface protein markers, their specific roles have often been conflated in early studies. However, these cells exhibit distinct differences in activation timing, spatial distribution, and myelin debris clearance capacity. Following injury, the resident microglia are the first responders, rapidly initiating the phagocytosis of tissue debris. In contrast, blood-derived macrophages are typically recruited to the injury site on approximately day 3 and gradually assume the primary role in clearing the myelin debris. While transient myelin uptake directs these cells toward resolving disease phenotypes, persistent intracellular myelin accumulation induces foam cell formation. To investigate the temporal dynamics of foam cell development post-SCI, we analyzed a single-cell RNA sequencing dataset from Li et al. Cell type annotation based on specific markers identified distinct populations of microglia and macrophages (Figure 1A,B). Temporal expression dynamics of foam cell-associated genes revealed a significant upregulation in both microglia and macrophages at various post-SCI time points (Figure 1C).
Immunofluorescence staining of spinal cord tissues pre- and post-injury revealed co-localization of the microglial marker Iba-1 with the myelin basic protein (MBP) as early as days 1 and 3 post-SCI (Figure 1D). This finding underscores the role of microglia, the resident immune cells of the central nervous system, as “pioneers” during the early stages of SCI. Microglia rapidly transition to a reactive phenotype and actively participate in myelin debris clearance. By day 7, substantial myelin debris was detected within the CD68+ cells (marking the activated microglia and macrophages), and this phenomenon persisted until day 28, reflecting the prolonged burden of phagocytes in clearing debris post-SCI (Figure 1D). Further analysis of lipid accumulation using BODIPY fluorescence staining revealed the formation of lipid droplets as early as day 3 post-injury. This accumulation intensified progressively on days 7 and 28 (Figure 1E,F).
2.2. Autophagy and Cholesterol Efflux in Foam Cell Formation
The dynamic changes in foam cell-associated gene expression and lipid droplet accumulation in the injury site post-SCI highlight the necessity for early and effective therapeutic intervention. In recent years, metabolic reprogramming has emerged as a promising strategy to modulate the cellular metabolic states, offering new perspectives on foam cell formation and inflammation regulation. The complex environment of the injury site is characterized by high levels of reactive oxygen species (ROS), inflammatory cytokines, and myelin debris. Therefore, there is a pressing need for a therapeutic approach that not only inhibits lipid accumulation in foam cells but also promotes their reparative phenotype. To identify effective strategies for mitigating foam cell formation, we investigated the impact of synergistic regulation of macrophage metabolism on lipid accumulation in foam cells.
Enhancing cholesterol efflux is widely recognized as an effective intervention strategy. This process primarily relies on the function of the ATP-binding cassette transporter A1 (ABCA1), which facilitates the transport of free cholesterol to the extracellular space, thereby reducing the intracellular cholesterol burden. However, the initial step of cholesterol efflux involves the release of cholesterol from lipid droplets. Consequently, promoting the efficient hydrolysis of cholesterol esters in lipid droplets is critical for achieving effective cholesterol efflux. Lipophagy, the autophagy-mediated degradation of lipid droplets, has been proven pivotal for lipid droplet processing within cells. Based on this, we explored the synergistic regulation of foam cell formation using the autophagy inducer Rapa and the liver X receptor agonist LXR-623.
The cytotoxicity of the drugs was assessed first (Figure S1A,B, Supporting Information). Subsequently, a foam cell model was established in vitro using purified myelin debris, and the intracellular total cholesterol content was measured post-treatment to optimize the drug combination ratio (Figure S1C, Supporting Information). At a Rapa:LXR-623 ratio of 1:5 (nmol/nmol, with Rapa concentration set as a reference at 40 nmol mL−1), the intracellular cholesterol levels were significantly reduced. Although a further reduction in the cholesterol content was observed at a ratio of 1:8, the encapsulation of such a high drug ratio within the NPs posed technical challenges. Therefore, a ratio of 1:5 was selected. To evaluate whether the combination therapy could synergistically inhibit foam cell formation, macrophages were stimulated with myelin debris (0.5 mg mL−1) for 72 h to establish a foam cell model. Western blot analysis revealed that, compared to the PBS group, the combination treatment significantly increased the LC3B-II/LC3B-I ratio and downregulated the expression of the autophagy-inhibitory protein P62 (Figure S2A,B, Supporting Information). Compared to the PBS group, immunofluorescence analysis further indicated that the combination treatment notably increased the number of LC3B-positive puncta (Figure S3A,C, Supporting Information), while reducing the number of P62-positive puncta (Figure S3B,D, Supporting Information).
Such a finding suggests that the combination treatment restored the autophagic flux in foam cells. Additionally, fluorescence imaging demonstrated that the combination treatment significantly reduced the BODIPY fluorescence intensity in cells (Figure S3A,E, Supporting Information), indicating a reduction in the intracellular lipid burden. The treatment also markedly upregulated the expression of LXRα and ABCA1, suggesting an enhanced cholesterol efflux capacity. Quantitative analysis of the intracellular total cholesterol and extracellular free cholesterol in the culture supernatant (Figure S4A,B, Supporting Information) showed that the PBS group exhibited significantly elevated intracellular cholesterol levels and reduced extracellular free cholesterol. Conversely, the combination treatment group exhibited significantly reduced intracellular cholesterol levels with increased extracellular free cholesterol levels, thereby significantly improving the cholesterol efflux efficiency (Figure S4C, Supporting Information). These findings demonstrate that promoting autophagy and cholesterol efflux synergistically inhibits foam cell formation.
|
Spinal cord injury (SCI) is a complex and dynamic pathological condition characterized by disrupted lipid metabolism and neuroinflammatory responses, posing significant therapeutic challenges. To address these, biomimetic bacterial outer membrane nanoparticles (BM-NPs) are designed by integrating the precise targeting capability of detoxified outer membrane vesicles (dOMVs) with the efficient drug-loading properties of liposomes. BM-NPs exhibit superior targeting efficiency toward peripheral neutrophils and macrophages, enabling spatiotemporal drug delivery via immune cells.
An innovative “Tortoise and Hare” dynamic adaptive delivery strategy is introduced, where neutrophils facilitate rapid drug transport during the acute phase of SCI, while macrophages ensure sustained delivery during the subacute phase. This strategy aligns with the dynamic pathological progression of SCI, offering precision targeting tailored to different stages of injury. BM-NPs demonstrate multifaceted therapeutic effects, including the suppression of foam cell formation through coordinated enhancement of lipid droplet autophagy and cholesterol efflux. Furthermore, they modulate the inflammatory microenvironment, preserve myelin integrity, and significantly promote neural functional recovery post-SCI. By overcoming the limitations of conventional delivery systems in targeting and timeliness, BM-NPs offer an innovative, highly efficient, and clinically translatable platform for SCI treatment and other acute inflammatory disorders of the central nervous system.
1. Introduction
The pathological features of spinal cord injury (SCI) encompass inflammatory responses, foam cell formation, myelin degradation, and neurological dysfunction. Among these, resident microglia and peripheral macrophages in the spinal cord play dual roles: they participate in the clearance of myelin debris while serving as pivotal mediators in the inflammatory response. However, their phagocytic activity can inadvertently lead to foam cell formation, exacerbating lipid metabolism dysregulation and inflammatory cascades, thereby posing a major barrier to effective treatment. Moreover, the presence of the blood-spinal cord barrier (BSCB) significantly hampers drug delivery efficiency, further limiting the therapeutic outcomes. Although nanotechnology-based delivery systems have improved drug targeting and therapeutic efficacy to some extent, they lack active cellular functionality, which limits their timeliness and delivery efficiency in the complex pathological environment of SCI. Conversely, cell-based drug delivery systems present a promising alternative. Leveraging the innate chemotactic abilities of intact cells, these systems enable precise and dynamic drug delivery.[7] In our previous study, we employed engineered macrophages for drug delivery in the treatment of SCI and revealed significantly improved functional recovery in a rat SCI model. However, we also observed that blood-derived macrophages only began to appear at the injury site at ≈3 days post-SCI. This indicates the existence of a “treatment void” period before macrophages reach the injury site, where no effective therapeutic intervention occurs. Therefore, it is critical to develop more timely and efficient therapeutic strategies to address this unmet need.
To address the aforementioned challenges, in this study, we developed biomimetic bacterial outer membrane nanoparticles (BM-NPs) derived from detoxified outer membrane vesicles (dOMVs) of msbB-deficient Escherichia coli. By knocking out the msbB gene, the toxicity of lipopolysaccharide (LPS) in dOMVs was significantly reduced while retaining the functional outer membrane protein A (OmpA), enabling efficient immune cell targeting. Through fusion with drug-loaded liposomes, BM-NPs not only exhibited excellent targeting specificity and drug-loading capacity but also achieved dual-drug synergistic regulation: rapamycin (Rapa) induced autophagy to promote lipid droplet degradation, while LXR-623 enhanced free cholesterol efflux. This synergistic effect effectively inhibited foam cell formation and improved the inflammatory microenvironment. Instead of simply decorating liposomes with isolated OmpA proteins, we adopted a fusion strategy with dOMVs to construct BM-NPs. This approach is expected to better preserve the native conformation and biological functionality of outer membrane proteins within a physiological lipid environment. In contrast, direct anchoring of membrane proteins onto synthetic liposomes may increase the risk of conformational instability, functional loss, and batch-to-batch variability. Therefore, the biomimetic fusion-based design may offer improved nanoparticle stability, immune cell targeting, and therapeutic potential for SCI treatment.
BM-NPs exhibited precise spatiotemporal targeting capabilities through a novel “Tortoise and Hare” collaborative delivery strategy. This approach emphasizes the complementary roles of neutrophils and macrophages in stage-specific drug delivery during SCI treatment. In the acute phase, neutrophils rapidly recognize and internalize BM-NPs via their innate chemotactic properties, acting as “Trojan horse” carriers preferentially recruited to the injury site. These neutrophils, akin to the swift hare, promptly release neutrophil extracellular traps (NETs) upon stimulation from the injury microenvironment, triggering the early release of drug-loaded NPs to regulate microglial function.
Approximately 3 days post-injury, monocyte-derived macrophages carrying BM-NPs gradually infiltrate the injury site, functioning as the enduring tortoise. These macrophages provide sustained drug delivery, thus extending the therapeutic effects. This collaborative delivery strategy enables BM-NPs to achieve stage-specific regulation across different pathological phases of SCI, effectively inhibiting foam cell formation, ameliorating the inflammatory microenvironment, and ultimately promoting significant neurological recovery.
In this study, we propose an immune cell-mediated dynamic adaptive delivery strategy that precisely aligns with the pathological stages of SCI, offering a novel and promising solution for SCI treatment.
2. Results and Discussion
2.1. Exacerbation of Foam Cell Formation Following SCI
During SCI, acute mechanical compression and subsequent inflammatory stimuli lead to significant demyelination. The resident microglia and blood-derived macrophages are the primary phagocytes responsible for clearing myelin debris in the spinal cord. Due to similarities in morphology, gene expression, and surface protein markers, their specific roles have often been conflated in early studies. However, these cells exhibit distinct differences in activation timing, spatial distribution, and myelin debris clearance capacity. Following injury, the resident microglia are the first responders, rapidly initiating the phagocytosis of tissue debris. In contrast, blood-derived macrophages are typically recruited to the injury site on approximately day 3 and gradually assume the primary role in clearing the myelin debris. While transient myelin uptake directs these cells toward resolving disease phenotypes, persistent intracellular myelin accumulation induces foam cell formation. To investigate the temporal dynamics of foam cell development post-SCI, we analyzed a single-cell RNA sequencing dataset from Li et al. Cell type annotation based on specific markers identified distinct populations of microglia and macrophages (Figure 1A,B). Temporal expression dynamics of foam cell-associated genes revealed a significant upregulation in both microglia and macrophages at various post-SCI time points (Figure 1C).
Immunofluorescence staining of spinal cord tissues pre- and post-injury revealed co-localization of the microglial marker Iba-1 with the myelin basic protein (MBP) as early as days 1 and 3 post-SCI (Figure 1D). This finding underscores the role of microglia, the resident immune cells of the central nervous system, as “pioneers” during the early stages of SCI. Microglia rapidly transition to a reactive phenotype and actively participate in myelin debris clearance. By day 7, substantial myelin debris was detected within the CD68+ cells (marking the activated microglia and macrophages), and this phenomenon persisted until day 28, reflecting the prolonged burden of phagocytes in clearing debris post-SCI (Figure 1D). Further analysis of lipid accumulation using BODIPY fluorescence staining revealed the formation of lipid droplets as early as day 3 post-injury. This accumulation intensified progressively on days 7 and 28 (Figure 1E,F).
2.2. Autophagy and Cholesterol Efflux in Foam Cell Formation
The dynamic changes in foam cell-associated gene expression and lipid droplet accumulation in the injury site post-SCI highlight the necessity for early and effective therapeutic intervention. In recent years, metabolic reprogramming has emerged as a promising strategy to modulate the cellular metabolic states, offering new perspectives on foam cell formation and inflammation regulation. The complex environment of the injury site is characterized by high levels of reactive oxygen species (ROS), inflammatory cytokines, and myelin debris. Therefore, there is a pressing need for a therapeutic approach that not only inhibits lipid accumulation in foam cells but also promotes their reparative phenotype. To identify effective strategies for mitigating foam cell formation, we investigated the impact of synergistic regulation of macrophage metabolism on lipid accumulation in foam cells.
Enhancing cholesterol efflux is widely recognized as an effective intervention strategy. This process primarily relies on the function of the ATP-binding cassette transporter A1 (ABCA1), which facilitates the transport of free cholesterol to the extracellular space, thereby reducing the intracellular cholesterol burden. However, the initial step of cholesterol efflux involves the release of cholesterol from lipid droplets. Consequently, promoting the efficient hydrolysis of cholesterol esters in lipid droplets is critical for achieving effective cholesterol efflux. Lipophagy, the autophagy-mediated degradation of lipid droplets, has been proven pivotal for lipid droplet processing within cells. Based on this, we explored the synergistic regulation of foam cell formation using the autophagy inducer Rapa and the liver X receptor agonist LXR-623.
The cytotoxicity of the drugs was assessed first (Figure S1A,B, Supporting Information). Subsequently, a foam cell model was established in vitro using purified myelin debris, and the intracellular total cholesterol content was measured post-treatment to optimize the drug combination ratio (Figure S1C, Supporting Information). At a Rapa:LXR-623 ratio of 1:5 (nmol/nmol, with the Rapa concentration set as a reference at 40 nmol mL−1), the intracellular cholesterol levels were significantly reduced. Although a further reduction in the cholesterol content was observed at a ratio of 1:8, the encapsulation of such a high drug ratio within the NPs posed technical challenges. Therefore, a ratio of 1:5 was selected. To evaluate whether the combination therapy could synergistically inhibit foam cell formation, macrophages were stimulated with myelin debris (0.5 mg mL−1) for 72 h to establish a foam cell model. Western blot analysis revealed that, compared to the PBS group, the combination treatment significantly increased the LC3B-II/LC3B-I ratio and downregulated the expression of the autophagy-inhibitory protein P62 (Figure S2A,B, Supporting Information). Compared to the PBS group, immunofluorescence analysis further indicated that the combination treatment notably increased the number of LC3B-positive puncta (Figure S3A,C, Supporting Information), while reducing the number of P62-positive puncta (Figure S3B,D, Supporting Information).
Such a finding suggests that the combination treatment restored the autophagic flux in foam cells. Additionally, fluorescence imaging demonstrated that the combination treatment significantly reduced the BODIPY fluorescence intensity in cells (Figure S3A,E, Supporting Information), indicating a reduction in the intracellular lipid burden. The treatment also markedly upregulated the expression of LXRα and ABCA1, suggesting an enhanced cholesterol efflux capacity. Quantitative analysis of the intracellular total cholesterol and extracellular free cholesterol in the culture supernatant (Figure S4A,B, Supporting Information) showed that the PBS group exhibited significantly elevated intracellular cholesterol levels and reduced extracellular free cholesterol. Conversely, the combination treatment group exhibited significantly reduced intracellular cholesterol levels with increased extracellular free cholesterol levels, thereby significantly improving the cholesterol efflux efficiency (Figure S4C, Supporting Information). These findings demonstrate that promoting autophagy and cholesterol efflux synergistically inhibits foam cell formation.
|
考点 1:"biomimetic bacterial outer membrane nanoparticles (BM-NPs)" 只能译为 "仿生细菌外膜纳米颗粒 (BM-NPs)",因为这是已经研究出来的医学成果,期刊以及新闻上都采取此翻译
考点2:"spatiotemporal drug delivery" 推荐译为 "时空药物递送"
考点 3:"detoxified outer membrane vesicles (dOMVs)" 必须译为 "低毒细菌外膜囊泡 (dOMVs)",因为这是已经研究出来的医学成果,期刊以及新闻上都采取此翻译
考点 4:"msbB-deficient Escherichia coli" 应该译为 "msbB基因缺陷型大肠杆菌"
|
1493a
|
学术论文
|
自然科学
|
156
|
翻译:英文翻译成为中文。不要输出译文以外的内容。以下是你本次的任务:The failure of consciousness to logically supervene on the physical tells us that no reductive explanation of consciousness can succeed. Given any account of the physical processes purported to underlie consciousness, there will always be a further question: Why are these processes accompanied by conscious experience? For most other phenomena, such a question is easily answered: the physical facts about those processes entail the existence of the phenomena. For a phenomenon such as life, for example, the physical facts imply that certain functions will be performed, and the performance of those functions is all we need to explain in order to explain life. But no such answer will suffice for consciousness. Physical explanation is well suited to the explanation of structure and of function. Structural properties and functional properties can be straightforwardly entailed by a low-level physical story, and so are clearly apt for reductive explanation. And almost all the high-level phenomena that we need to explain ultimately come down to structure or function: think of the explanation of waterfalls, planets, digestion, reproduction, language. But the explanation of consciousness is not just a matter of explaining structure and function. Once we have explained all the physical structure in the vicinity of the brain, and we have explained how all the various brain functions are performed, there is a further sort of explanandum: consciousness itself. Why should all this structure and function give rise to experience? The story about the physical processes does not say. We can put this in terms of the thought experiments given earlier. Any story about physical processes applies equally to me and to my zombie twin. It follows that nothing in that story says why, in my case, consciousness arises. Similarly, any story about physical processes applies equally to my inverted twin, who sees blue where I see red: it follows that nothing in that story says why my experience is of one variety rather than another. The very fact that it is logically possible that the physical facts could be the same while the facts about consciousness are different shows us that as Levine (1983) has put it, there is an explanatory gap between the physical level and conscious experience. If this is right, the fact that consciousness accompanies a given physical process is a further fact, not explainable simply by telling the story about the physical facts. In a sense, the accompaniment must be taken as brute. We might try to systematize and explain these brute facts in terms of some simple underlying pattern, but there will always remain an element here that is logically independent of the physical story. Perhaps we might get some kind of explanation by combining the underlying physical facts with certain further bridging principles that link the physical facts with consciousness, but this explanation will not be a reductive one. The very need for explicit bridging principles shows us that consciousness is not being explained reductively, but is being explained on its own terms. Of course nothing I have said implies that physical facts are irrelevant to the explanation of consciousness. We can still expect physical accounts to play a significant role in a theory of consciousness, giving information about the physical basis of consciousness, for example, and perhaps yielding a detailed correspondence between various aspects of physical processing and aspects of conscious experience. 
Such accounts may be especially useful in helping to understand the structure of consciousness: the patterns of similarity and difference between experiences, the geometric structure of phenomenal fields, and so on. I say much more about these and other things that physical explanation can tell us about experience in a nonreductive framework in Chapter 6. But a physical account, alone, is not enough. At this point, a number of objections naturally arise. Objection 1: Are we setting the standards too high? Some might argue that explanation of any high-level phenomena will postulate "bridge laws" in addition to a low-level account, and that it is only with the aid of these bridge laws that the details of the high-level phenomena are derived. However, as the discussion in the last chapter suggests (and as is carefully argued by Horgan [1978]), in such cases the bridge laws are not further facts about the world. Rather, the connecting principles themselves are logically supervenient on the low-level facts. The extreme case of such a bridging principle is a supervenience conditional, which we have seen is usually a conceptual truth. Other more "localized" bridging principles, such as the link between molecular motion and heat, can at least be derived from the physical facts. For consciousness, by contrast, such bridging principles must be taken as primitive. It is interesting to see how a typical high-level property—such as life, say—evades the arguments put forward in the case of consciousness. First, it is straightforwardly inconceivable that there could be a physical replica of a living creature that was not itself alive. Perhaps a problem might arise due to context-dependent properties (would a replica that forms randomly in a swamp be alive, or be human?), but fixing environmental facts eliminates even that possibility. Second, there is no "inverted life" possibility analogous to the inverted spectrum. Third, when one knows all the physical facts about an organism (and possibly about its environment), one has enough material to know all the biological facts. Fourth, there is no epistemic asymmetry with life; facts about life in others are as accessible, in principle, as facts about life in ourselves. Fifth, the concept of life is plausibly analyzable in functional terms: to be alive is roughly to possess certain capacities to adapt, reproduce, and metabolize. As a general point, most high-level phenomena come down to matters of physical structure and function, and we have good reason to believe that structural and functional properties are logically supervenient on the physical. Objection 2: Couldn't a vitalist have said the same thing about life? All this notwithstanding, a common reaction to the sort of argument I have given is to reply that a vitalist about life might have said the same things. For example, a vitalist might have claimed that it is logically possible that a physical replica of me might not be alive, in order to establish that life cannot be reductively explained. And a vitalist might have argued that life is a further fact, not explained by any account of the physical facts. But the vitalist would have been wrong. By analogy, might not the opponent of reductive explanation for consciousness also be wrong? I think this reaction misplaces the source of vitalist objections. Vitalism was mostly driven by doubt about whether physical mechanisms could perform all the complex functions associated with life: adaptive behavior, reproduction, and the like. 
At the time, very little was known about the enormous sophistication of biochemical mechanisms, so this sort of doubt was quite natural. But implicit in these very doubts is the conceptual point that when it comes to explaining life, it is the performance of various functions that needs to be explained. Indeed, it is notable that as physical explanation of the relevant functions gradually appeared, vitalist doubts mostly melted away. With consciousness, by contrast, the problem persists even when the various functions are explained. Presented with a full physical account showing how physical processes perform the relevant functions, a reasonable vitalist would concede that life has been explained. There is not even conceptual room for the performance of these functions without life. Perhaps some ultrastrong vitalist would deny even this, claiming that something is left out by a functional account of life—the vital spirit, perhaps. But the obvious rejoinder is that unlike experience, the vital spirit is not something we have independent reason to believe in. Insofar as there was ever any reason to believe in it, it was as an explanatory construct—"We must have such a thing in order to be able to do such amazing stuff." But as an explanatory construct, the vital spirit can be eliminated when we find a better explanation of how the functions are performed. Conscious experience, by contrast, forces itself on one as an explanandum and cannot be eliminated so easily. One reason a vitalist might think something is left out of a functional explanation of life is precisely that nothing in a physical account explains why there is something it is like to be alive. Perhaps some element of belief in a "vital spirit" was tied to the phenomena of one's inner life. Many have perceived a link between the concepts of life and experience, and even today it seems reasonable to say that one of the things that needs to be explained about life is the fact that many living creatures are conscious. But the existence of this sort of vitalist doubt is of no comfort to the proponent of reductive explanation of consciousness, as it is a doubt that has never been overturned. Objection 3: Is conceivability a guide to possibility? Philosophers are often suspicious of arguments that give a key role to conceivability, frequently responding that conceivability does not suffice for possibility. This is a subtle issue that I have discussed earlier and will discuss again; but here, the subtleties are not especially relevant. When it comes to matters of explanation, it is clear that conceivability is central. If on reflection we find it conceivable that all these physical processes could take place in the absence of consciousness, then no reductive explanation of consciousness will be satisfactory: the further question of why we exist and not zombies will always arise. Even if conceivability is tied to the limits of human capacity, explanation is tied to the limits of human capacity in a similar way. Another way to put the point is to note that reductive explanation of a phenomenon in terms of the physical requires an a priori implication from the physical facts to the relevant high-level facts (logical supervenience according to primary intension, as I put it earlier). If such a connection does not hold, then we will always be able to raise the further question of why the physical processes give rise to consciousness. 
We have seen that in almost all domains, the right sort of connection holds, making reductive explanation possible; but it does not seem to hold for conscious experience. One can question whether ontological views such as materialism turn on these a priori links—I discuss that matter in the next chapter—but when it comes to reductive explanation, such links are crucial. Objection 4: Isn't this a collection of circular intuitions? It might be further objected that the arguments I have given consist, at bottom, in a collection of intuitions. There is certainly a sense in which all these arguments are based on intuition, but I have tried to make clear just how natural and plain these intuitions are, and how forced it is to deny them. The main intuition at work is that there is something to be explained—some phenomenon associated with first-person experience that presents a problem not presented by observation of cognition from the third-person point of view. Given the premise that some explanandum is forced on us by first-person experience that is not forced on us by third-person observation, most of the arguments above fall out. It follows immediately, for example, that what needs to be explained cannot be analyzed as the playing of some functional role, for the latter phenomenon is revealed to us by third-person observation and is much more straightforward. The "intuition" at work here is the very raison d'être of the problem of consciousness. The only consistent way to get around the intuitions is to deny the problem and the phenomenon altogether. One can always, at least when speaking "philosophically," deny the intuitions altogether, and deny that there is anything (apart from the performance of various functions) that needs explaining. But if one takes consciousness seriously, the conclusions for which I am arguing must follow. Objection 5: Doesn't all explanation have to stop somewhere? A final objection is that no explanation gives one something for nothing: all explanation has to stop somewhere. In explaining the motion of the planets, for example, one takes the laws of gravity and the existence of mass for granted. Perhaps we should simply take something for granted in this case, too? I am sympathetic with this point; I think we do have to take something for granted in explaining consciousness. But in doing so we inevitably move beyond a reductive explanation. Indeed, this sort of analogy lends support to the nonreductive position I am advocating. We take the laws of physics for granted because they are fundamental laws. If we take a link between physical processes and conscious experience for granted, this suggests that the link should be taken as fundamental in the same way. I return to this point in the next chapter.
|
The failure of consciousness to logically supervene on the physical tells us that no reductive explanation of consciousness can succeed. Given any account of the physical processes purported to underlie consciousness, there will always be a further question: Why are these processes accompanied by conscious experience? For most other phenomena, such a question is easily answered: the physical facts about those processes entail the existence of the phenomena. For a phenomenon such as life, for example, the physical facts imply that certain functions will be performed, and the performance of those functions is all we need to explain in order to explain life. But no such answer will suffice for consciousness. Physical explanation is well suited to the explanation of structure and of function. Structural properties and functional properties can be straightforwardly entailed by a low-level physical story, and so are clearly apt for reductive explanation. And almost all the high-level phenomena that we need to explain ultimately come down to structure or function: think of the explanation of waterfalls, planets, digestion, reproduction, language. But the explanation of consciousness is not just a matter of explaining structure and function. Once we have explained all the physical structure in the vicinity of the brain, and we have explained how all the various brain functions are performed, there is a further sort of explanandum: consciousness itself. Why should all this structure and function give rise to experience? The story about the physical processes does not say. We can put this in terms of the thought experiments given earlier. Any story about physical processes applies equally to me and to my zombie twin. It follows that nothing in that story says why, in my case, consciousness arises. Similarly, any story about physical processes applies equally to my inverted twin, who sees blue where I see red: it follows that nothing in that story says why my experience is of one variety rather than another. The very fact that it is logically possible that the physical facts could be the same while the facts about consciousness are different shows us that as Levine (1983) has put it, there is an explanatory gap between the physical level and conscious experience. If this is right, the fact that consciousness accompanies a given physical process is a further fact, not explainable simply by telling the story about the physical facts. In a sense, the accompaniment must be taken as brute. We might try to systematize and explain these brute facts in terms of some simple underlying pattern, but there will always remain an element here that is logically independent of the physical story. Perhaps we might get some kind of explanation by combining the underlying physical facts with certain further bridging principles that link the physical facts with consciousness, but this explanation will not be a reductive one. The very need for explicit bridging principles shows us that consciousness is not being explained reductively, but is being explained on its own terms. Of course nothing I have said implies that physical facts are irrelevant to the explanation of consciousness. We can still expect physical accounts to play a significant role in a theory of consciousness, giving information about the physical basis of consciousness, for example, and perhaps yielding a detailed correspondence between various aspects of physical processing and aspects of conscious experience. 
Such accounts may be especially useful in helping to understand the structure of consciousness: the patterns of similarity and difference between experiences, the geometric structure of phenomenal fields, and so on. I say much more about these and other things that physical explanation can tell us about experience in a nonreductive framework in Chapter 6. But a physical account, alone, is not enough. At this point, a number of objections naturally arise. Objection 1: Are we setting the standards too high? Some might argue that explanation of any high-level phenomena will postulate "bridge laws" in addition to a low-level account, and that it is only with the aid of these bridge laws that the details of the high-level phenomena are derived. However, as the discussion in the last chapter suggests (and as is carefully argued by Horgan [1978]), in such cases the bridge laws are not further facts about the world. Rather, the connecting principles themselves are logically supervenient on the low-level facts. The extreme case of such a bridging principle is a supervenience conditional, which we have seen is usually a conceptual truth. Other more "localized" bridging principles, such as the link between molecular motion and heat, can at least be derived from the physical facts. For consciousness, by contrast, such bridging principles must be taken as primitive. It is interesting to see how a typical high-level property—such as life, say—evades the arguments put forward in the case of consciousness. First, it is straightforwardly inconceivable that there could be a physical replica of a living creature that was not itself alive. Perhaps a problem might arise due to context-dependent properties (would a replica that forms randomly in a swamp be alive, or be human?), but fixing environmental facts eliminates even that possibility. Second, there is no "inverted life" possibility analogous to the inverted spectrum. Third, when one knows all the physical facts about an organism (and possibly about its environment), one has enough material to know all the biological facts. Fourth, there is no epistemic asymmetry with life; facts about life in others are as accessible, in principle, as facts about life in ourselves. Fifth, the concept of life is plausibly analyzable in functional terms: to be alive is roughly to possess certain capacities to adapt, reproduce, and metabolize. As a general point, most high-level phenomena come down to matters of physical structure and function, and we have good reason to believe that structural and functional properties are logically supervenient on the physical. Objection 2: Couldn't a vitalist have said the same thing about life? All this notwithstanding, a common reaction to the sort of argument I have given is to reply that a vitalist about life might have said the same things. For example, a vitalist might have claimed that it is logically possible that a physical replica of me might not be alive, in order to establish that life cannot be reductively explained. And a vitalist might have argued that life is a further fact, not explained by any account of the physical facts. But the vitalist would have been wrong. By analogy, might not the opponent of reductive explanation for consciousness also be wrong? I think this reaction misplaces the source of vitalist objections. Vitalism was mostly driven by doubt about whether physical mechanisms could perform all the complex functions associated with life: adaptive behavior, reproduction, and the like. 
At the time, very little was known about the enormous sophistication of biochemical mechanisms, so this sort of doubt was quite natural. But implicit in these very doubts is the conceptual point that when it comes to explaining life, it is the performance of various functions that needs to be explained. Indeed, it is notable that as physical explanation of the relevant functions gradually appeared, vitalist doubts mostly melted away. With consciousness, by contrast, the problem persists even when the various functions are explained. Presented with a full physical account showing how physical processes perform the relevant functions, a reasonable vitalist would concede that life has been explained. There is not even conceptual room for the performance of these functions without life. Perhaps some ultrastrong vitalist would deny even this, claiming that something is left out by a functional account of life—the vital spirit, perhaps. But the obvious rejoinder is that unlike experience, the vital spirit is not something we have independent reason to believe in. Insofar as there was ever any reason to believe in it, it was as an explanatory construct—"We must have such a thing in order to be able to do such amazing stuff." But as an explanatory construct, the vital spirit can be eliminated when we find a better explanation of how the functions are performed. Conscious experience, by contrast, forces itself on one as an explanandum and cannot be eliminated so easily. One reason a vitalist might think something is left out of a functional explanation of life is precisely that nothing in a physical account explains why there is something it is like to be alive. Perhaps some element of belief in a "vital spirit" was tied to the phenomena of one's inner life. Many have perceived a link between the concepts of life and experience, and even today it seems reasonable to say that one of the things that needs to be explained about life is the fact that many living creatures are conscious. But the existence of this sort of vitalist doubt is of no comfort to the proponent of reductive explanation of consciousness, as it is a doubt that has never been overturned. Objection 3: Is conceivability a guide to possibility? Philosophers are often suspicious of arguments that give a key role to conceivability, frequently responding that conceivability does not suffice for possibility. This is a subtle issue that I have discussed earlier and will discuss again; but here, the subtleties are not especially relevant. When it comes to matters of explanation, it is clear that conceivability is central. If on reflection we find it conceivable that all these physical processes could take place in the absence of consciousness, then no reductive explanation of consciousness will be satisfactory: the further question of why we exist and not zombies will always arise. Even if conceivability is tied to the limits of human capacity, explanation is tied to the limits of human capacity in a similar way. Another way to put the point is to note that reductive explanation of a phenomenon in terms of the physical requires an a priori implication from the physical facts to the relevant high-level facts (logical supervenience according to primary intension, as I put it earlier). If such a connection does not hold, then we will always be able to raise the further question of why the physical processes give rise to consciousness. 
We have seen that in almost all domains, the right sort of connection holds, making reductive explanation possible; but it does not seem to hold for conscious experience. One can question whether ontological views such as materialism turn on these a priori links—I discuss that matter in the next chapter—but when it comes to reductive explanation, such links are crucial. Objection 4: Isn't this a collection of circular intuitions? It might be further objected that the arguments I have given consist, at bottom, in a collection of intuitions. There is certainly a sense in which all these arguments are based on intuition, but I have tried to make clear just how natural and plain these intuitions are, and how forced it is to deny them. The main intuition at work is that there is something to be explained—some phenomenon associated with first-person experience that presents a problem not presented by observation of cognition from the third-person point of view. Given the premise that some explanandum is forced on us by first-person experience that is not forced on us by third-person observation, most of the arguments above fall out. It follows immediately, for example, that what needs to be explained cannot be analyzed as the playing of some functional role, for the latter phenomenon is revealed to us by third-person observation and is much more straightforward. The "intuition" at work here is the very raison d'être of the problem of consciousness. The only consistent way to get around the intuitions is to deny the problem and the phenomenon altogether. One can always, at least when speaking "philosophically," deny the intuitions altogether, and deny that there is anything (apart from the performance of various functions) that needs explaining. But if one takes consciousness seriously, the conclusions for which I am arguing must follow. Objection 5: Doesn't all explanation have to stop somewhere? A final objection is that no explanation gives one something for nothing: all explanation has to stop somewhere. In explaining the motion of the planets, for example, one takes the laws of gravity and the existence of mass for granted. Perhaps we should simply take something for granted in this case, too? I am sympathetic with this point; I think we do have to take something for granted in explaining consciousness. But in doing so we inevitably move beyond a reductive explanation. Indeed, this sort of analogy lends support to the nonreductive position I am advocating. We take the laws of physics for granted because they are fundamental laws. If we take a link between physical processes and conscious experience for granted, this suggests that the link should be taken as fundamental in the same way. I return to this point in the next chapter.
|
考点1:“consciousness/conscious”译为“意识/有意识的”
考点2:“logically supervene”译为“逻辑(地)随附”,逻辑随附(logical supervenience)是相对自然随附(natural supervenience)而言的固定用法,Chalmers 在The Conscious Mind(即本文节选)中将随附性分为两类,一是全局/局部随附;二是逻辑/自然随附。不应拆分进行翻译
考点3:“conscious experience”译为“意识经验”,“experience”直译可为“经验;体验”,虽然其含义包含个体独特体验的成分,但在哲学文本中,“experience”通常译为“经验”
考点4:“low-level physical story”译为“低阶物理描述”,心灵哲学术语是借鉴语言哲学、数理逻辑的专业术语来的,“low-level”固定译法为“低阶”
考点5:“zombie twin”译为“孪生僵尸”
考点6:“inverted twin”译为“颠倒孪生体”
考点7:“explanatory gap”译为“解释鸿沟”
考点8:“brute facts”译为“原始事实”
考点9:“bridging principles”译为“桥接原则”
考点10:“physical processing”译为“物理加工”,涉及过程状态的描述,一般将之视为“意识加工(conscious processing)”对应的结构,即“物理加工”
考点11:“phenomenal fields”译为“现象(性质)领域”,“phenomenal”是指“现象性质的”,现象性质是指某种独特的内在的非物理的心理体验(如看到某种颜色、感到疼痛等),而不是指某种公共现象,不应翻译为“现象场”,后文“phenomenal fields”翻译为“现象性质”同理
考点12:“conceptual truth”译为“概念真理”
考点13:“epistemic asymmetry”译为“认知反对称(性)”,“asymmetry”一般沿用逻辑学术语译为“反对称(性)”
考点14:“functional terms”译为“功能(性)词项”
考点15:“vitalist”译为“活力论者”,“vitalism”通常译为“活力论”,对应译为“活力论者”。
考点16:“vital spirit”译为“生命精神”或“活力灵魂”等
考点17:“conceivability/possibility”译为“可设想性/可能性”
考点18:“reflection”译为“反思”
考点19:“why we exist and not zombies will always arise”译为“为何是我们存在,而不是僵尸存在,这一问题总是会出现”
考点20:“a priori implication”译为“先天蕴涵”,目前“a priori”在分析哲学文本中有时也可译为“先验”,如在讨论先验/后验必然性的时候,但需要与传统哲学文本的“transcendental(超验)”区别开来,尤其不应对同一个英文词给出两个争议极大的汉译。
考点21:“primary intension”译为“首要(初始)内涵”,源于Chalmers的二维语义学,一般译为“首要(初始)内涵”
考点22:“no explanation gives one something for nothing”译为“没有哪个解释是无中生有的”
考点23:“nonreductive position ”译为“非还原立场”
|
15a36
|
学术论文
|
人文科学
|
118
|
DiscoX Translation Benchmark
DiscoX is a benchmark for the evaluation of LLMs on discourse- and expert-level translation tasks.
Dataset At A Glance
- Languages: English ⇄ Chinese (100 English→Chinese tasks, 100 Chinese→English tasks)
- Total samples: 200 discourse- and expert-level translation items
- Average passage length: ~1.7k characters (min 0.73k, max 3.04k)
- Meta fields: primary & secondary domain labels, structured rubrics, prompt IDs, etc.
- Reference Rubrics: every task ships with multiple rubrics annotated by experts, capturing key points for evaluating translation quality
Primary domain coverage:
| Primary Domain | Samples | Share |
| --- | --- | --- |
| 学术论文 (Academic papers) | 121 | 60.5% |
| 非学术论文 (Non-academic tasks) | 79 | 39.5% |
Secondary domain highlights include Social Sciences (社会科学), Natural Sciences (自然科学), Humanities (人文科学), Applied Disciplines (应用学科), News & Information (新闻资讯), Domain-Specific Scenarios (垂类场景), and Literature & Arts (文学艺术).
File Structure
discox.json: the core dataset. Each record contains:
- ori_text: the source text to be translated
- prompt: the source text with translation instructions added
- reference_list: rubrics designed for evaluating translation results
- Primary_Domain, Secondary_Domain: high-level topic labels
- prompt_id, __internal_uuid__: identifiers for specific tasks
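A minimal loading sketch (this assumes discox.json is a top-level JSON array of records with the fields listed above; the exact container layout is an assumption, not part of the official spec):

```python
import json

# Load the core dataset file (assumed here to be a JSON array of task records).
with open("discox.json", encoding="utf-8") as f:
    tasks = json.load(f)

# Inspect one record using the fields listed above.
task = tasks[0]
print(task["prompt_id"], task["Primary_Domain"], task["Secondary_Domain"])
print(task["prompt"][:200])       # translation instruction plus source text
print(task["ori_text"][:200])     # bare source text
print(task["reference_list"])     # expert rubrics (考点); may be a string or a list
```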
Notes & Recommendations
- The reference_list entries are designed to enable targeted verification of translation fidelity: by converting them into structured checks (e.g., terminology, tone, and named entities), the evaluation can perform fine-grained, pointwise assessments of key translation aspects (see the sketch after this list).
- The translation instructions in prompt are written in Chinese and specify the desired output language.
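As one way to operationalize the note above, the hypothetical sketch below converts a terminology rubric into an automatic check. The regex, the helper name check_terminology, and the example strings are illustrative assumptions, not part of the benchmark's official tooling:

```python
import re

# Match rubrics of the form seen in the samples above, e.g.
# 考点2:"spatiotemporal drug delivery" 推荐译为 "时空药物递送"
# (both straight and curly quotes appear in the data).
RUBRIC_RE = re.compile(r'["“]([^"“”]+)["”][^"“”]*译为\s*["“]([^"“”]+)["”]')

def check_terminology(rubric: str, candidate: str):
    """Return True/False for a terminology rubric, or None when the rubric
    is not a simple term-mapping rule and needs manual or model-based judgment."""
    m = RUBRIC_RE.search(rubric)
    if m is None:
        return None
    _source_term, target_term = m.groups()
    return target_term in candidate

# Example: a rubric quoted from prompt_id 156 above, against a made-up candidate.
rubric = '考点2:"spatiotemporal drug delivery" 推荐译为 "时空药物递送"'
candidate = "……实现经由免疫细胞的时空药物递送。"
print(check_terminology(rubric, candidate))  # True
```

In this framing, each rubric becomes one pointwise pass/fail signal, and non-terminology rubrics (tone, register, discourse coherence) fall through to a human or LLM judge.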
License
Our data is released under the cc-by-4.0 license.