Monográfico
Evaluación de la traducción
InfoTrad, 16 de mayo de 2012



Adewuni, Salawu “Evaluation of interpretation during congregational services and public religious retreats in south-west Nigeria.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 56, n. 2 (2010).  pp. 129-138. http://ejournals.ebsco.com/direct.asp?ArticleID=498294ACF09DAE8130A5

In most spiritual gatherings in Southwest Nigeria, as observed today, preaching is in English or in Yoruba and then interpreted into Yoruba or English. English is an official language in Nigeria and Yoruba is the local language in most of the Southwest of the country. Most people are to some extent bilingual. The objective of this study is to evaluate the quality of the interpretation carried out in those spiritual gatherings. Questionnaires were administered, and the data were collated and analyzed. A total of 39 respondents (78%) were satisfied with the output of the interpretation from English to Yoruba, while only 48% were satisfied with the interpretation from Yoruba to English. The study concludes that interpretation from English to Yoruba is being handled better and that more training should be given to those interpreting from Yoruba to English.

Amigó, Enrique, Jesús Giménez and Felisa Verdejo “Procesamiento lingüístico en métricas de evaluación automática de traducciones.” Procesamiento del Lenguaje Natural vol., n. 43 (2009).  pp. 215-222. http://www.sepln.org/revistaSEPLN/revista/43/articulos/art24.pdf

Despite efforts to include linguistic processing in automatic evaluation metrics for translation systems, the most widely used metrics are still based on lexical overlap, because the advantages of using linguistic techniques in this context have not yet been clarified. This article analyses in depth the advantages of applying linguistic processing at the syntactic and semantic levels in the automatic evaluation of translations. (A)

Angelelli, Claudia V. “Validating professional standards and codes: Challenges and opportunities.” Interpreting vol. 8, n. 2 (2006).  pp.:

This article presents a focus group study on the validation of the California Standards for Healthcare Interpreters produced by the California Healthcare Interpreting Association (CHIA) in 2002. The reactions of healthcare interpreters to the Standards, and their opinions and thoughts on its provisions are reviewed and analyzed. The article first addresses the issues and problems healthcare interpreters encounter when implementing the Standards, and highlights the challenges they face when trying to balance their professional mandate with the reality of their working environment. In particular, it describes the difficulties of defining the interpreter’s role in the system. The final section of the article draws attention to the need for bridges between research and practice as a means of guaranteeing that the field of interpreting will continue to develop.

Antia, Bassey E. “Competence and quality in the translation of specialized texts: investigating the role of terminology resources.” Quaderns vol. 6, n. (2001).  pp.: http://ddd.uab.es/search.py?&cc=quaderns&f=issue&p=11385790n6&rg=100&sf=fpage&so=a&as=0&sc=0&ln=ca

The experiment reported here is part of a broader study (Antia, in press). Due to space constraints, the present discussion omits a number of relevant issues, which can however be found in chapter 3 of the broader study. Cognizant of this forum on empirical-experimental research in translation, the current discussion addresses certain issues that were not of primary concern in the main study.

Arevalillo Doval, Juan José “A propósito de la norma europea de calidad para los servicios de traducción.” El español, lengua de traducción vol., n. 2 (2004).  pp.:

The world of translation has undergone an undeniable revolution in recent years, driven largely by the application of computing to the translator's daily work. Indeed, in a relatively short period the translator has gone from working with pen and typewriter to handling the most sophisticated word processors on the market. So much so that even the dictaphone, until recently one of translators' favourite devices, has been displaced by software that lets the translator dictate a sight translation to the computer, which transcribes the text on screen with a surprising accuracy rate. There is no doubt that computing has drawn the translator out of a legendary isolation and opened up a wide range of resources of every kind, easing the task to a degree unthinkable until very recently and helping to overcome the old barriers of time and distance. The Internet, computer-assisted translation programs, word processors, terminology tools and other software shared with other professional sectors have, moreover, boosted the productivity of today's translator.

Arevalillo, Juan José “Componentes principales de un programa informático I.” La linterna del traductor vol., n. 8 (2004).  pp.: http://traduccion.rediris.es/loca2.htm

As far as localization is concerned, four components interest us from the translator's point of view: the user interface, online help, printed documentation and accompanying material. Complementary processes not related to translation are excluded from this article. The main characteristics of each of these components are explained below.

Arevalillo, Juan José “Componentes principales de un programa informático II.” La linterna del traductor vol., n. 8 (2004).  pp.: http://traduccion.rediris.es/loca3.htm

This is possibly the most voluminous component in terms of word count. It used to compete with the printed documentation in this respect, but the current trend is to produce comprehensive help texts with numerous internal links, at the expense of printed documentation, whose number of manuals keeps shrinking, with the consequent savings in production costs. This component does not reach the technical level of the user interface, but it does require certain skills, since the translated texts must also be compiled to produce the help files the user consults. There are two main types of help: WinHelp and HTML.

Arevalillo, Juan José “Presencia de la localización en el mercado y su formación específica.” La linterna del traductor vol., n. 8 (2004).  pp.: http://traduccion.rediris.es/loca4.htm

According to estimates by the American Translators Association (ATA), 10% of the world's translation output is literary translation. If those figures still hold, localization can claim a good slice of the remaining 90%. LISA (2003: 21), in its "Guía de introducción al sector de la localización", gives the following figures: LISA puts the total size of the worldwide localization industry at a minimum of 3.7 billion dollars a year, with a likely figure of around 5 billion (some estimates run as high as 15 billion). The information technology segment of the localization industry alone accounts for close to 10 billion dollars (including all vertical markets, this number is substantially higher). By way of comparison, recent figures for the size of the translation industry range between 11 and 18 billion dollars (according to the American Translators Association, ATA) and 30 billion dollars (according to the European Commission).

Bastin, Georges L. “Evaluating Beginners’ Re-expression and Creativity: A Positive Approach.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=193&type=pdf

Although translation may be considered a two (or three) phase communication process, consisting of comprehension - (conceptualization) - re-expression, most theoretical and pedagogical studies have been devoted to comprehension and conceptualization. There is, however, an increasing need to establish a theoretical basis for the third phase since, contrary to Boileau's dictum (that well conceived ideas can be easily expressed), even when comprehension is complete, words do not come easily. If re-expression is to be better taught, evaluation of re-expression must be better thought. This paper focuses on the evaluation of re-expression in translation. Based on an in-depth study of various English texts translated into French by some 38 first-year translation students

Bel, Núria “Review of Dybkjær, Laila; Hemsen, Holmer; Minker, Wolfgang (eds) Evaluation of Text and Speech Systems.” Machine Translation vol. 21, n. 1 (2007).  pp. 73-76. http://dx.doi.org/10.1007/s10590-008-9037-2

Adab, Beverly “The Translation of Advertising: A Framework for Evaluation.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 47, n. 2 (2002).  pp.: http://www.ebsco.com/online/direct.asp?ArticleID=1GW7R7QFH6JVJQ8HQBQ9

In Towards a Science of Translating (1969), Nida asserts that “There will always be a variety of valid answers to the question, ‘Is this a good translation?’” In the professional translation environment, the whole question of how to evaluate a translated text is one which poses a challenge to the client, to the translator and to those responsible for training the translator. Much has been written about the difficulty of identifying (objectively) verifiable and perhaps more widely generalisable criteria for this form of evaluation, which needs to relate to the functional adequacy (Nord 1997, Toury 1995) of the translated text for its intended purpose. Such criteria would be equally welcome as guidelines for the actual translation process, to assist the translator in selecting from possible translation alternatives. Think-aloud protocols have tried to identify what goes on in the ‘black box’ and the cognitive processes involved in the process of text production (Kussmaul 1991, 1995). However, TAPs are a means to an end, the end being the aim of achieving a better understanding of the process in order to minimise the occurrence of potential errors and rationalise and optimise the process. This article attempts to show how Descriptive Analysis (see Toury 1995) of text pairs can highlight potentially successful strategy types, in relation to aspects of a functionalist approach to text production. Having determined which text production criteria can be of use in evaluating the potential success of a translation choice within a text, it should be possible to formulate a set of guidelines against which translators could test choices at micro- and macro-textual levels.

Bowker, Lynne “A Corpus-Based Approach to Evaluating Student Translations.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=191&type=pdf

Translation evaluation is highly problematic because of its subjective nature. In a translation classroom, efforts must be made to develop an approach to translation evaluation that enables evaluators to provide objective and constructive feedback to their students. This article describes a specially-designed Evaluation Corpus and presents an experiment which demonstrates that such a corpus can be used to significantly reduce the subjective element in translation evaluation and illustrates that this reduced subjectivity will benefit both evaluators and students.

Bowker, Lynne “Towards a Methodology for a Corpus-Based Approach to Translation Evaluation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002135ar.pdf

Translation evaluation is undoubtedly one of the most difficult tasks facing a translator trainer. It is unlikely that there will ever be a ready-made formula that will transform this task into a simple one; however, this article suggests that the task can be made somewhat easier by using a specially designed Evaluation Corpus that can act as a benchmark against which translator trainers can compare student translations.

Brunette, Louise “Towards a Terminology for Translation Quality Assessment: A Comparison of TQA Practices.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=190&type=pdf

Recent research on the revision and assessment of general texts has revealed that the terms and concepts used in discussing this process are somewhat confused, hence the need to map out the terminology used in various evaluative practices. This article offers an overview of translation assessment and attempts to define the key terms specific to this field, including subfields such as translation management quality control (assessment; formative revision) as well as revision theory (assessment criteria; purpose). Each concept and term is discussed at length and exemplified. The article focuses initially on various assessment procedures, including pragmatic revision, translation quality assessment, quality control, didactic revision, and ‘fresh look’. For these procedures to be scientifically credible and ethically acceptable, they must be based on clearly defined criteria. Thus, the second part of the article puts forward criteria which have been delimited and duly tested in prior research, namely: logic, context, purpose and language norm.

Buendía Castro, Miriam and José Manuel Ureña Gómez-Moreno “¿Cómo diseñar un corpus de calidad?: parámetros de evaluación.” Sendebar: Revista de la Facultad de Traducción e Interpretación vol., n. 21 (2010).  pp. 165-180.

Cáceres Würsig, Ingrid, Luis Pérez González, et al. “Calidad y traducción: perspectivas académicas y profesionales.” Panacea: boletín de medicina y traducción vol. 5, n. 16 (2004).  pp.: http://www.medtrad.org/panacea/PanaceaAnteriores.htm

On 25-27 February 2004, the IV Jornadas sobre la Formación y la Profesión del Traductor e Intérprete, "Calidad y traducción: perspectivas académicas y profesionales", were held, organized by the Department of Translation and Interpreting of the Faculty of Communication and Humanities of the Universidad Europea de Madrid. Sponsored by leading companies in the sector (Star, Reinisch, Déjà Vu and Hermes), these IV Jornadas drew a total of 251 attendees from 19 countries. To address the general theme of the conference, three speakers from different areas of the teaching and practice of translation and interpreting were invited: Emma Wagner spoke on translation quality in international organizations; Daniel Gile on quality in the teaching of translation and interpreting; and, finally, Miguel Núñez on the ACT's participation in drawing up a quality standard for professional translation services. (More information on the proceedings via birgit.s@ing.fil.uem.es.)

Callison-Burch, Chris and Raymond S. Flournoy “A Program for Automatically Selecting the Best Output from Multiple Machine Translation Engines.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/callison.pdf

This paper describes a program that automatically selects the best translation from a set of translations produced by multiple commercial machine translation engines. The program is simplified by assuming that the most fluent item in the set is the best translation. Fluency is determined using a trigram language model. Results are provided illustrating how well the program performs for human ranked data as compared to each of its constituent engines.
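The selection criterion described here (score each engine's output with a target-language trigram model and keep the most fluent candidate) can be sketched roughly as follows; this is a toy add-one-smoothed model with invented names (TrigramLM, pick_most_fluent), not the authors' implementation:

```python
import math
from collections import Counter

def trigrams(tokens):
    """Yield trigrams over a sentence padded with boundary symbols."""
    padded = ["<s>", "<s>"] + tokens + ["</s>"]
    for i in range(len(padded) - 2):
        yield tuple(padded[i:i + 3])

class TrigramLM:
    """Tiny add-one-smoothed trigram language model."""

    def __init__(self, corpus):
        self.tri, self.bi = Counter(), Counter()
        vocab = set()
        for sentence in corpus:
            tokens = sentence.split()
            vocab.update(tokens)
            for w1, w2, w3 in trigrams(tokens):
                self.tri[(w1, w2, w3)] += 1
                self.bi[(w1, w2)] += 1
        self.v = len(vocab) + 2  # account for boundary symbols

    def logprob(self, sentence):
        """Length-normalised log-probability of a sentence."""
        lp, n = 0.0, 0
        for w1, w2, w3 in trigrams(sentence.split()):
            lp += math.log((self.tri[(w1, w2, w3)] + 1) /
                           (self.bi[(w1, w2)] + self.v))
            n += 1
        return lp / max(n, 1)

def pick_most_fluent(candidates, lm):
    """Keep the engine output the language model scores as most fluent."""
    return max(candidates, key=lm.logprob)
```

On this view no bilingual knowledge is needed at selection time: the model judges only target-language fluency, which is exactly the simplification the abstract acknowledges.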

Campbell, Stuart “Critical Structures in the Evaluation of Translations from Arabic into English as a Second Language.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=192&type=pdf

It is argued in this paper that the output of translators working into English as a second language can be evaluated by means of examining their ability to translate certain critical structures. These claims are made on the basis of data-based research with the support of a cognitive theory about language processing during translation, and an analytical procedure that models the decision pathways of translators.

Cancelo, Pablo “Evaluation of machine translation systems.” Últimas Corrientes Teóricas En Los Estudios de Traducción y sus Aplicaciones vol., n. (2001).  pp.:

Machine translation products are currently receiving a considerable amount of hype. At one end of the scale are mass media reports on one product after another that use the latest magical technique to produce nearly perfect translations. Unfortunately, these reports are usually based on the manufacturers’ promotional press releases, and make it into print without any attempt at verification or review. At the other end of the spectrum are the detractors of machine translation, those who assert that all translation programs are useless, and the whole effort is a meaningless waste of time. In the middle, however, is another group of people – of which this researcher is one – who hold that machine translation technology, while not perfect, has progressed in recent years and some of the systems can render a source language document into an understandable, though rough, target language translation.

Clifford, Andrew “Discourse Theory and Performance-Based Assessment : Two Tools for Professional Interpreting.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002345ar.pdf

This article examines interpreter assessment and draws attention to the limits of a lexico-semantic approach. It proposes using features of discourse theory to identify some of the competencies needed to interpret and suggests developing assessment instruments with the technical rigour common in other fields. The author gives examples of discursive features in interpretation and shows how these elements might be used to construct a rubric for assessing interpreter performance.

Colina, Sonia “Translation Quality Evaluation: Empirical Evidence for a Functionalist Approach.” The Translator vol. 14, n. 1 (2008).  pp. 97-134. http://www.stjerome.co.uk/periodicals/journal.php?j=72&v=563&i=564

Following a review of existing approaches to translation quality evaluation, this paper describes a proposal for evaluation that addresses some of the deficiencies found in these models. The proposed approach is referred to as componential because it evaluates components of quality separately, and functionalist, because evaluation is carried out relative to the function specified for the translated text. In order to obtain some empirical evidence for the functionalist/componential approach, a tool was developed and pilot-tested for inter-rater reliability. In addition, the research project sought to obtain some data on qualifications of raters/users and their performance using the tool. Forty raters were asked to use the tool to rate three translated texts. The texts selected for evaluation consisted of reader-oriented health education materials. Raters were bilinguals, professional translators and language teachers. Some basic training was provided. Data was collected by means of the tool and a questionnaire. Results indicate good inter-rater reliability for the tool; teachers’ and translators’ ratings were more alike than those of bilinguals; bilinguals were found to rate higher and faster than the other groups. The results provide support for further research and testing of this tool and offer evidence in favour of the approach proposed.

Colina, Sonia “Further evidence for a functionalist approach to translation quality evaluation.” Target: International Journal on Translation Studies vol. 21, n. 2 (2009).  pp. 235-264. http://dx.doi.org/10.1075/target.21.2.02col

Colina (2008) proposes a componential-functionalist approach to translation quality evaluation and reports on the results of a pilot test of a tool designed according to that approach. The results show good inter-rater reliability and justify further testing. The current article presents an experiment designed to test the approach and tool. Data was collected during two rounds of testing. A total of 30 raters, consisting of Spanish, Chinese and Russian translators and teachers, were asked to rate 4-5 translated texts (depending on the language). Results show that the tool exhibits good inter-rater reliability for all language groups and texts except Russian and suggest that the low reliability of the Russian raters’ scores is unrelated to the tool itself. The findings are in line with those of Colina (2008).
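Neither abstract names the reliability statistic used; purely as an illustration of what an inter-rater agreement measure of this kind looks like, Cohen's kappa for two raters assigning categorical quality labels can be computed as follows (a generic sketch, not Colina's procedure):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Values near 1 indicate agreement well beyond chance; values near 0, agreement no better than chance.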

Conde Ruano, Tomás “Propuestas para la evaluación de estudiantes de traducción.” Sendebar: Revista de la Facultad de Traducción e Interpretación vol., n. 20 (2009).  pp. 231-255.

This paper analyses a problematic situation: translation evaluation in the teaching environment. After describing the circumstances under which this activity is carried out, the paper focuses on the main problems concerned and finally makes various proposals based on data both from empirical research and from other theoretical studies. In short, the paper argues in favour of continuous assessment, blind evaluation and holistic systems involving subjective aspects and the assessment of learning, rather than of specific performance. In addition, a fresh interpretation of the roles adopted by teachers and students is proposed, together with a more flexible and transparent application of evaluation criteria, the promotion of self-regulated learning and collaboration between students, and exam conditions that resemble actual professional practice.

Darwin, Maki “Trial and Error: An Evaluation Project on Japanese <> English MT Output Quality.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/darwin.pdf

This paper describes a small-scale but organized attempt to evaluate output quality of several Japanese MT systems. The project also served as the first experiment of the implementation of the in-house MT evaluation guidelines created in 2000. Since time was limited and the budget was not infinite, it was launched with the following compact components: Five people; 300 source sentences per language pair; and 160 hours per evaluator. The quantitative results showed noteworthy phenomena. Although the test materials had been presented in a way that evaluators could not identify the performance of any particular system, the results were quite consistent.

Dean, Robyn K. and Robert Q. Pollard Jr “Effectiveness of Observation-Supervision Training in Community Mental Health Interpreting Settings.” redit: Revista electrónica de didáctica de la traducción y la interpretación vol., n. 3 (2009).  pp. 1-17. http://dialnet.unirioja.es/servlet/extart?codigo=3150216

Observation-supervision (O-S) is a problem-based learning approach to interpreter education. This mixed-methods study implemented O-S over four geographically diverse iterations in community mental health settings. Forty American Sign Language interpreters participated in O-S groups and forty others comprised two control groups. Measures included a pre-post test of mental health knowledge, a mental health interpreting practical exam, and objective and subjective participant evaluations. The results indicate that O-S was superior to an equivalent amount of didactic training in imparting mental health knowledge. Practical exam and participant evaluation results indicate that O-S was more effective in imparting interpreting judgment and ethical decision-making skills. O-S can be employed in other specialized interpreting practice settings and with spoken as well as signed language interpreters.

Delisle, Jean “L’évaluation des traductions par l’historien.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002514ar.pdf

A rigorous method for evaluating translations is as necessary to the historian as to the teacher of translation. Drawing on the theoretical work of Henri Meschonnic, we attempt to show that translations of the past (mainly literary texts) cannot be evaluated by the rules laid down by the translator-theoreticians who wrote treatises on how to translate, and that philological analysis and contrastive linguistics are likewise insufficient to judge the success or failure of a translated text. The translation historian will rather seek to establish whether the translated work has the historicity of the original, and whether the translation-as-recreation has invented its own poetics and replaced problems of language with solutions of discourse. Translating only the meaning of a work risks effacing its literarity and its poetics, resulting in the production of a non-text.

Dewaele, Jean-Marc “Évaluation du texte interprété: sur quoi se basent les interlocuteurs natifs?” Meta vol. 39, n. 1 (1994).  pp.: http://www.erudit.org/revue/meta/1994/v39/n1/002561ar.pdf

One of the golden rules of interpreting is that one should work only into one's mother tongue. Nevertheless, professionals are frequently required to interpret into a second language. But what are the linguistic features of this type of discourse, and what judgement does the interpreter form of his or her own ability to communicate in the target language? To answer this question, a study was carried out gathering the opinions of a number of native speakers about a speaker who is also a native. The variables on which they based their judgements relate above all to the lexical richness of the discourse, hesitations and interruptions, and so on. A similar analysis of what native speakers think of a non-native speaker, however, remains to be done.

Dhonnchadha, Elaine Uí, Caoilfhionn Nic Pháidín, et al. “Design, Implementation and Evaluation of an Inflectional Morphology Finite State Transducer for Irish.” Machine Translation vol. 18, n. 3 (2003).  pp. 173-193. http://dx.doi.org/10.1007/s10590-004-2480-9

Minority languages must endeavour to keep up with and avail of language technology advances if they are to prosper in the modern world. Finite state technology is mature, stable and robust. It is scalable and has been applied successfully in many areas of linguistic processing, notably in phonology, morphology and syntax. In this paper, the design, implementation and evaluation of a morphological analyser and generator for Irish using finite state transducers is described. In order to produce a high-quality linguistic resource for NLP applications, a complete set of inflectional morphological rules for Irish is handcrafted, as is the initial test lexicon. The lexicon is then further populated semi-automatically using both electronic and printed lexical resources. Currently we achieve coverage of 89% on unrestricted text. Finally we discuss a number of methodological issues in the design of NLP resources for minority languages.
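By way of illustration only, with invented English-like arcs rather than the handcrafted Irish rules the paper describes, a finite-state transducer for morphological generation reduces to a table of transitions that consume analysis symbols and emit surface strings:

```python
import string

class FST:
    """Minimal deterministic finite-state transducer."""

    def __init__(self, start, finals, arcs):
        # arcs: {(state, input_symbol): (output_string, next_state)}
        self.start, self.finals, self.arcs = start, finals, arcs

    def transduce(self, symbols):
        """Map an input symbol sequence to its output string, or None."""
        state, out = self.start, []
        for sym in symbols:
            if (state, sym) not in self.arcs:
                return None  # input rejected
            emitted, state = self.arcs[(state, sym)]
            out.append(emitted)
        return "".join(out) if state in self.finals else None

# Toy generator: copy the lemma's letters, then realise a number tag.
arcs = {(0, ch): (ch, 0) for ch in string.ascii_lowercase}
arcs[(0, "+Pl")] = ("s", 1)   # plural tag -> surface "s"
arcs[(0, "+Sg")] = ("", 1)    # singular tag -> empty string
generator = FST(start=0, finals={1}, arcs=arcs)
```

Here generator.transduce(list("cat") + ["+Pl"]) yields "cats". Running the transducer in the analysis direction amounts to inverting the arc table; real systems such as the one described compile thousands of such rules into a single network.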

Espunya, Anna “Contrastive and translational issues in rendering the English progressive form into Spanish and Catalan: an informant-based study.” Meta vol. 46, n. 3 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n3/002710ar.pdf

This is a study on the formal correspondences for the English progressive in translations from English to Spanish and Catalan, with a special focus on the choice between simple and progressive forms. Its methodological approach includes the participation of informants both as translators and as evaluators of published translations. The paper discusses both the language-internal and task-related factors that play a role in the choice of verb forms.

Estival, Dominique “Karen Sparck Jones & Julia R. Galliers, Evaluating Natural Language Processing Systems: An Analysis and Review. Lecture Notes in Artificial Intelligence 1083.” Machine Translation vol. 12, n. 4 (1997).  pp. 375-379. http://dx.doi.org/10.1023/A:1007918307730

Fan, May and Xu Xunfeng “An evaluation of an online bilingual corpus for the self-learning of legal English.” System vol. 30, n. 1 (2002).  pp.: http://www.sciencedirect.com/science/article/B6VCH-44HX9WX-1/2/a9eccf25e787c2cb5f0aaf1d2b19ba49

Based on a relatively simple but innovative idea of inserting hyperlinks at the sentence level between parallel texts, a bilingual corpus of legal and documentary texts in English and Chinese has been created and made available online together with a web-based concordancer. In addition to introducing such a corpus, this paper reports a study which seeks to evaluate the usefulness of the corpus in the self-learning of legal English. The subjects involved were a group of Chinese students doing a degree in Translation at a university in Hong Kong, where English Common Law is still used after the handover in 1997, when sovereignty over Hong Kong was returned from Britain to China. The instruments for data collection included two comprehension tasks, a questionnaire and a follow-up interview. Findings of the study indicate that students considered the bilingual corpus useful as they needed both language versions in the understanding of legal provisions, though they were found to rely more on Chinese. Interesting data in relation to how users of the bilingual corpus switched between the two languages have also been obtained. This paper also investigates how the inherent characteristics of legal English contribute to the comprehension difficulty of L2 learners irrespective of the help obtained from the bilingual corpus.
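The corpus design rests on a simple mechanism: each English sentence links to its Chinese counterpart and vice versa. A hypothetical helper (not the corpus builders' actual code) shows the idea:

```python
def parallel_html(pairs):
    """Render sentence-aligned parallel texts with mutual hyperlinks.

    pairs: list of (english_sentence, chinese_sentence) tuples.
    """
    rows = []
    for i, (en, zh) in enumerate(pairs):
        # Each sentence carries an anchor and points at its counterpart.
        rows.append(f'<p id="en{i}"><a href="#zh{i}">{en}</a></p>')
        rows.append(f'<p id="zh{i}"><a href="#en{i}">{zh}</a></p>')
    return "\n".join(rows)
```

A concordancer built on such pages can jump from any hit to the corresponding sentence in the other language, which is what lets learners compare the two versions of a legal provision.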

Farrús, Mireia, Marta R. Costa-Jussà, et al. “Study and correlation analysis of linguistic, perceptual, and automatic machine translation evaluations.” Journal of the American Society for Information Science and Technology vol. 63, n. 1 (2012).  pp. 174-184. http://dx.doi.org/10.1002/asi.21674

Evaluation of machine translation output is an important task. Various human evaluation techniques as well as automatic metrics have been proposed and investigated in the last decade. However, very few evaluation methods take the linguistic aspect into account. In this article, we use an objective evaluation method for machine translation output that classifies all translation errors into one of the five following linguistic levels: orthographic, morphological, lexical, semantic, and syntactic. Linguistic guidelines for the target language are required, and human evaluators use them to classify the output errors. The experiments are performed on English-to-Catalan and Spanish-to-Catalan translation outputs generated by four different systems: two rule-based and two statistical. All translations are evaluated using the following three methods: a standard human perceptual evaluation method, several widely used automatic metrics, and the human linguistic evaluation. Pearson and Spearman correlation coefficients between the linguistic, perceptual, and automatic results are then calculated, showing that the semantic level correlates significantly with both perceptual evaluation and automatic metrics.
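The final correlation step here is standard statistics; for reference, Pearson's r and Spearman's rho (Pearson applied to ranks; this minimal version does not average tied ranks) can be computed without any library:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks (ties not averaged)."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0.0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))
```

Pearson measures linear association between the raw scores, while Spearman only asks whether the two evaluations rank the systems in the same order, which is usually the more relevant question when comparing evaluation methods.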

Fawcett, Peter “Translation in the Broadsheets.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=196&type=pdf

Despite the decline in literary translation into English documented by some scholars, the broadsheets and review journals published in the United Kingdom continue to invite reviewers – who are themselves usually creative authors in their own right – to review translated literature. Occasionally, broader questions of translation are also discussed. This paper examines a sample of such reviews in an attempt to uncover the parameters defining the usually implicit framework within which translation criticism is conducted and what seems to be the overwhelmingly preferred translation strategy.

Fuji, Masaru and Hitoshi Isahara “Evaluation Method for Determining Groups of Users Who Find MT “Useful”.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/fuji.pdf

We used two commercial E-J (English-to-Japanese) MT systems to prepare machine-translated reading texts, in order to find out whether the results obtained are system-dependent.

Fulford, Heather “Translation Tools: An Exploratory Study of their Adoption by UK Freelance Translators.” Machine translation vol. 16, n. 4 (2001).  pp.: http://ipsapp007.lwwonline.com/content/getfile/4598/16/3/fulltext.pdf

The rising demand for translations over the last few decades has led to the recognition that software tools were urgently needed to help increase translators’ productivity, and to support them in their efficient and effective delivery of accurate and consistent translations in ever-shorter time periods (Lang and Bennett, 2000: 203). In order to help inform and guide this software development, a number of researchers discussed the nature of the support required by translators.

Gabr, Moustafa “Program Evaluation : A Missing Critical Link in Translator Training.” The Translation Journal vol. 5, n. 1 (2001).  pp.: http://accurapid.com/journal/15training.htm

Translation, being a craft on the one hand, requires training, i.e. practice under supervision, and being a science on the other hand, has to be based on language theories. Therefore, any sound approach to translation teaching has to draw on proper training methodologies. Training focuses on the improvement of the knowledge, skills and abilities of the individual, and it is functional and relevant only when it is evaluated (Zenger and Hargis, 1982). When we evaluate a training course, we actually evaluate its effectiveness, i.e. we measure the achievement of its objectives. A training course can be effective in meeting some objectives and be ineffective in meeting others. For example, a translation course may accomplish its objective of improving the students’ text analysis skills and fail in promoting their cross-cultural awareness.

Gamon, Michael, Hisami Suzuki, et al. “Using Machine Learning for System-Internal Evaluation of Transferred Linguistic Representations.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/gamon.pdf

We present an automated, system-internal evaluation technique for linguistic representations in a large-scale, multilingual MT system. We use machine-learned classifiers to recognize the differences between linguistic representations generated by transfer in an MT context and representations that are produced by ‘native’ analysis of the target language. In the MT scenario, convergence of the two is the desired result. Holding the feature set and the learning algorithm constant, the accuracy of the classifiers provides a measure of the overall difference between the two sets of linguistic representations: classifiers with higher accuracy correspond to more pronounced differences between representations. More importantly, the classifiers yield the basis for error-analysis by providing a ranking of the importance of linguistic features. The more salient a linguistic criterion is in discriminating transferred representations from ‘native’ representations, the more work will be needed in order to get closer to the goal of producing native-like MT. We present results from using this approach on the Microsoft MT system and discuss its advantages and possible extensions.

Garcia Alvarez, Ana Maria “Der translatorische Kommentar als Evaluationsmodell der studentischen Übersetzungsprozesse.” Lebende Sprachen vol. 53, n. 1 (2008).  pp. 26-31.

The translation commentary as a model for evaluating students’ translation processes.

Gerzymisch-Arbogast, Heidrun “Equivalence Parameters and Evaluation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002886ar.pdf

This article deals with the role of equivalence parameters in the evaluation of translations. After a brief overview of the discussions inherent in the very concept of equivalence, we propose to examine this concept on two levels: that of the system, on the basis of which the evaluation criteria are established, and that of the text, which allows the selection of the specific criteria for evaluating the text in question as well as a hierarchisation of these criteria (from the evaluator’s point of view). At the system level, we propose to include the parameters of coherence and of thematic and/or isotopic networks in the catalogue of evaluation criteria. At the text level, we discuss some translation variances inherent in these parameters.

Al-Shatter, Ghassan Hassan “Implementation and Evaluation of a New Learning Approach in Arabic: Implications for Translator Training.” Translation Watch Quarterly vol. 3, n. 1 (2007).  pp.:

This paper discusses planning and implementing a new learning approach for teaching Arabic as part of the University General Requirements Unit at the United Arab Emirates University. The new learning approach challenges the traditional teaching methodology used in the United Arab Emirates. The planning and implementation scheme is analyzed, and training, teaching style, and classroom management processes are evaluated. The study examines responses by the University administration, faculty members, and students to the introduction of this new teaching methodology. It suggests that teaching standard Arabic as part of the University’s general education requirements is important for Arab students who wish to be successful in their studies at the University as well as in their professional lives. The implications for translators are also addressed.

Gile, Daniel “L’évaluation de la qualité de l’interprétation en cours de formation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002890ar.pdf

The evaluation of interpretation quality during training differs from professional evaluation essentially because of its guidance function and the importance it gives to the interpretation process, as opposed to the target speech alone. It is proposed to use process-oriented evaluation at the beginning of training, both for its psychological advantages and for its effectiveness in guiding students. It will nevertheless be necessary to move gradually towards product-oriented evaluation, in order to strengthen the instructor’s influence on the finalisation of the product and to prepare students for the aptitude tests at the end of the course. A possible difference between the instructors’ norms and those of the market poses no fundamental problem as long as it concerns the required level, which is higher in training, and not the norms and strategies of the interpreter.

Goff-Kfouri, Carol Ann “Testing and Evaluation in the Translation Classroom.” The Translation Journal vol. 8, n. 3 (2004).  pp.: http://accurapid.com/journal/29edu.htm

It is not at all uncommon today for professional translators to be invited to teach a course at a university. Many translators, though flattered at being invited to teach, are hesitant to accept the position due to their lack of pedagogical knowledge. One particular problematic area is that of marking translations and making decisions on student competence. This paper presents the basic information professional translators need to know before they enter the classroom, and outlines possible testing strategies they might use to make their teaching experience enriching and valuable for themselves as well as their students.

Guessoum, Ahmed and Rached Zantout “A Methodology for a Semi-Automatic Evaluation of the Lexicons of Machine Translation Systems.” Machine translation vol. 16, n. 2 (2001).  pp.: http://ipsapp009.lwwonline.com/content/getfile/4598/14/3/abstract.htm

The lexicon is a major part of any Machine Translation (MT) system. If the lexicon of an MT system is not adequate, this will affect the quality of the whole system. Building a comprehensive lexicon, i.e., one with a high lexical coverage, is a major activity in the process of developing a good MT system. As such, the evaluation of the lexicon of an MT system is clearly a pivotal issue for the process of evaluating MT systems. In this paper, we introduce a new methodology that was devised to enable developers and users of MT Systems to evaluate their lexicons semi-automatically. This new methodology is based on the idea of the importance of a specific word or, more precisely, word sense, to a given application domain. This importance, or weight, determines how the presence of such a word in, or its absence from, the lexicon affects the MT system’s lexical quality, which in turn will naturally affect the overall output quality. The method, which adopts a black-box approach to evaluation, was implemented and applied to evaluating the lexicons of three commercial English–Arabic MT systems. A specific domain was chosen in which the various word-sense weights were determined by feeding sample texts from the domain into a system developed specifically for that purpose. Once this database of word senses and weights was built, test suites were presented to each of the MT systems under evaluation and their output rated by a human operator as either correct or incorrect. Based on this rating, an overall automated evaluation of the lexicons of the systems was deduced.
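The weighting idea at the heart of this methodology reduces to a simple score: each word sense contributes its domain weight only when it is present in the lexicon and its output was rated correct by the human operator. A minimal sketch, in which the data layout and names are our assumptions, not the paper's:

```python
def lexical_quality(sense_weights, lexicon, judged_correct):
    """Weighted lexical quality of an MT lexicon for one domain.

    sense_weights: dict mapping a word-sense identifier to its domain weight;
    lexicon: set of senses present in the system's lexicon;
    judged_correct: set of senses whose output a human rated correct.
    A sense contributes its weight only if covered and judged correct.
    """
    total = sum(sense_weights.values())
    covered = sum(w for s, w in sense_weights.items()
                  if s in lexicon and s in judged_correct)
    return covered / total if total else 0.0
```

The weights would come from feeding domain sample texts into the frequency-counting system the authors describe; a heavily weighted missing sense then hurts the score far more than a rare one.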

Guessoum, A. and R. Zantout “Semi-automatic Evaluation of the Grammatical Coverage of Machine Translation Systems.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/guessoum.pdf

In this paper we present a methodology for automating the evaluation of the grammatical coverage of machine translation (MT) systems. The methodology is based on the importance of unfolded grammatical structures, which represent the most basic syntactic pattern for a sentence in a given language. A database of unfolded grammatical structures is built to evaluate the parser of any NLP or MT system. The evaluation results in an overall measure called the grammatical coverage. The results of implementing the above approach on three English-to-Arabic commercial MT systems are presented.
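Read literally, the overall measure is the share of unfolded grammatical structures in the database that the system handles acceptably. A hedged sketch of that ratio (identifiers are illustrative, not the authors'):

```python
def grammatical_coverage(structures, parses_ok):
    """Fraction of unfolded grammatical structures a system handles.

    structures: list of structure identifiers from the database;
    parses_ok: predicate returning True when the system parses or
    translates the structure's test sentence acceptably.
    """
    if not structures:
        return 0.0
    handled = sum(1 for s in structures if parses_ok(s))
    return handled / len(structures)
```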

Guessoum, Ahmed and Rached Zantout “A Methodology for Evaluating Arabic Machine Translation Systems.” Machine Translation vol. 18, n. 4 (2004).  pp. 299-335. http://dx.doi.org/10.1007/s10590-005-2412-3

This paper presents a methodology for evaluating Arabic Machine Translation (MT) systems. We are specifically interested in evaluating lexical coverage, grammatical coverage, semantic correctness and pronoun resolution correctness. The methodology presented is statistical and is based on earlier work on evaluating MT lexicons in which the idea of the importance of a specific word sense to a given application domain and how its presence or absence in the lexicon affects the MT system’s lexical quality, which in turn will affect the overall system output quality. The same idea is used in this paper and generalized so as to apply to grammatical coverage, semantic correctness and correctness of pronoun resolution. The approach adopted in this paper has been implemented and applied to evaluating four English-Arabic commercial MT systems. The results of the evaluation of these systems are presented for the domain of the Internet and Arabization.

Hagemann, S. “Zur Evaluierung kreativer Übersetzungsleistungen.” Lebende Sprachen vol. 52, n. 3 (2007).  pp. 102-109.

On the evaluation of creative translation performance.

Hajmohammadi, A. “Translation Evaluation in a News Agency.” Perspectives-Studies in Translatology vol. 13, n. 3 (2005).  pp.:

In this article, I argue that most approaches to translation evaluation that are central to Translation Studies scholars and teachers are out of touch with market demands. I present the working conditions of translators in a news agency and discuss the evaluation of translation performance in the market. I am particularly keen on calling attention to the differences between academic and market parameters for evaluation. First, there is a presentation of the purpose of evaluation in news agency environments and subsequently, I describe the assessment of news translation. I finish by examining two parameters of evaluation, which, in my opinion, distinguish translation evaluation in the market from the academy. The suggestions are based on my observations as a translation evaluator in IRIB news agency, Tehran.

Hale, Sandra Beatriz, Nigel Bond, et al. “Interpreting accent in the courtroom.” Target vol. 23, n. 1 (2011).  pp. 48-61. http://www.ingentaconnect.com/content/jbp/targ/2011/00000023/00000001/art00004
http://dx.doi.org/10.1075/target.23.1.03hal

Findings from research conducted into interpreted court proceedings have suggested that it is the interpreters’ rendition that the judiciary and jurors hear and upon which they base their evaluations of witnesses’ testimony. Previous research into the effect of foreign accent of witnesses indicated particular foreign accents negatively influence mock jurors’ evaluations of the testimony. The aim of this study was to examine the effect of interpreters’ foreign accents on the evaluation of witnesses’ testimony. Contrary to previous research, our results indicated that participants rated the witness more favourably when testimony was interpreted by an interpreter with a foreign language accent. Accented versions were all rated as more credible, honest, trustworthy and persuasive than the non-accented versions. This paper discusses the findings in the light of methodological concerns and limitations, and highlights the need for further research in the area.

Hampshire, Stephen and Carmen Porta Salvia “Translation and the Internet: Evaluating the Quality of Free Online Machine Translators.” Quaderns: Revista de traducció vol., n. 17 (2010).  pp. 197-209. http://ddd.uab.cat/pub/quaderns/11385790n17p197.pdf

The late 1990s saw the advent of free online machine translators such as Babelfish, Google Translate and Transtext. Professional opinion regarding the quality of the translations they provide oscillates wildly from the «laughably bad» (Ali, 2007) to «a tremendous success» (Yang and Lange, 1998). While the literature on commercial machine translators is vast, there are only a handful of studies, mostly in blog format, that evaluate and rank free online machine translators. This paper offers a review of the most significant contributions in that field with an emphasis on two key issues: (i) the need for a ranking system; (ii) the results of a ranking system devised by the authors of this paper. Our small-scale evaluation of the performance of ten free machine translators (FMTs) in «league table» format shows what a user can expect from an individual FMT in terms of translation quality. Our rankings are a first tentative step towards allowing the user to make an informed choice as to the most appropriate FMT for his/her source text and thus produce higher FMT target text quality.

Hassani, Ghodrat “A Corpus-Based Evaluation Approach to Translation Improvement.” Meta vol. 56, n. 2 (2011).  pp. 351-373. http://id.erudit.org/iderudit/1006181ar

In professional settings translation evaluation has always been weighed down by the albatross of subjectivity to the detriment of both evaluators as clients and translators as service providers. But perhaps this burden can be lightened, through ongoing evaluator feedback and exchange that foster objectivity among the evaluators while sharpening the professional skills and recognition of the translators. The purpose of this paper is to explore the promising avenues that a corpus-based evaluation approach can possibly offer them. Using the Corpus of Contemporary American English (COCA) for evaluation purposes in a professional setting, the approach adopted for this study regards translation evaluation as a means to a worthwhile end, in a nutshell, better translations. This approach also illustrates how the unique features of the corpus can minimize subjectivity in translation evaluation; this in turn leads to translations of superior quality.
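In such a setting the corpus serves as an arbiter: a disputed rendering can be checked against attested usage by counting its occurrences. The toy lookup below illustrates the principle over a tokenised corpus held in memory; COCA itself is queried through its web interface, so this is only a conceptual sketch:

```python
def corpus_support(corpus_tokens, phrase):
    """Count how often a tokenised phrase occurs in a tokenised corpus."""
    n = len(phrase)
    return sum(corpus_tokens[i:i + n] == phrase
               for i in range(len(corpus_tokens) - n + 1))
```

An evaluator could then compare the counts for the translator's wording and the proposed correction, grounding the verdict in frequency evidence rather than personal preference.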

Hatim, Basil “Translating Quality Assessment.” The Translator vol. 4, n. 1 (1998).  pp. 91-100. http://www.stjerome.co.uk/periodicals/viewfile.php?id=119&type=pdf

Review of A Model for Translation Quality Assessment (Tübinger Beiträge zur Linguistik 88). Juliane House. Tübingen: Gunter Narr, 1977/1981. 344 pp. Pb. ISBN 3-87808-088-3.

Helmreich, Stephen and David Farwell “Translation Differences and Pragmatics-Based MT.” Machine translation vol. 13, n. 1 (1998).  pp.: http://ipsapp007.lwwonline.com/content/getfile/4598/4/3/fulltext.pdf

This paper examines differences between two professional translations into English of the same Spanish newspaper article. Among other explanations for these differences, such as outright errors and free variation, we find a significant number of differences are due to differing beliefs on the part of the translators about the subject matter and about what the author wished to say. Furthermore, these differences are consistent with divergent global views of the translators about the likelihood of future events (earthquakes and tidal waves) and about (rational or irrational) reactions of people to such likelihood. We discuss the requirements for a pragmatics-based model of translation that would account for these differences.

Rhine-Medina, Carol “Interpreted Psychological Evaluations.” Proteus vol. 13, n. 3 (2004).  pp.: http://www.najit.org/proteus/v13n3/Vol13_No3_Rhine-Medina.PDF

Sooner or later, a judiciary interpreter is bound to come into contact with psychiatric assignments. Exposure to this facet of our judicial system may materialize in a variety of forms. One may be mass calendar calls of yellow-clad (in many counties) inmates claiming or suspected to be unfit to comprehend the charges against them or stand trial, some of whom may have requested removal to state psychiatric facilities. Judges issue rulings in individual hearings and order psychiatric examinations, referred to by section number, depending on the objective of the evaluation.

House, Juliane “Translation Quality Assessment : Linguistic Description vs Social Evaluation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003141ar.pdf

The paper first reports on three different approaches to translation evaluation which emanate from different concepts of ‘meaning’ and its role in translation. Secondly, a functional-pragmatic model of translation evaluation is described, which features a distinction between different types of translations and versions, and stresses the importance of using a ‘cultural filter’ in one particular type of translation. Thirdly, the influence of English as a worldwide lingua franca on translation processes is discussed, and finally the important distinction between linguistic analysis and social judgement in translation evaluation is introduced, and conclusions for the practice of assessing the quality of a translation are drawn.

Hovy, Eduard, Margaret King, et al. “Principles of Context-Based Machine Translation Evaluation.” Machine translation vol. 17, n. 1 (2002).  pp.: http://ipsapp009.kluweronline.com/IPS/content/ext/x/J/4598/I/17/A/3/abstract.htm

This article defines a Framework for Machine Translation Evaluation ( FEMTI) which relates the quality model used to evaluate a machine translation system to the purpose and context of the system. Our proposal attempts to put together, into a coherent picture, previous attempts to structure a domain characterised by overall complexity and local difficulties. In this article, we first summarise these attempts, then present an overview of the ISO/IEC guidelines for software evaluation (ISO/IEC 9126 and ISO/IEC 14598). As an application of these guidelines to machine translation software, we introduce FEMTI, a framework that is made of two interrelated classifications or taxonomies. The first classification enables evaluators to define an intended context of use, while the links to the second classification generate a relevant quality model (quality characteristics and metrics) for the respective context. The second classification provides definitions of various metrics used by the community. Further on, as part of ongoing, long-term research, we explain how metrics are analyzed, first from the general point of view of “meta-evaluation”, then focusing on examples. Finally, we show how consensus towards the present framework is sought for, and how feedback from the community is taken into account in the FEMTI life-cycle.
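FEMTI's two linked taxonomies can be thought of as a mapping from a context of use to a quality model (quality characteristics plus their metrics). The miniature below is our own invented fragment to make the mechanism concrete; the contexts, characteristics and metrics listed are not FEMTI's actual classifications:

```python
# First taxonomy: intended contexts of use, each linked to the quality
# characteristics relevant to it (all entries are illustrative).
CONTEXT_TO_CHARACTERISTICS = {
    "gisting":       ["intelligibility"],
    "dissemination": ["fidelity", "fluency", "terminology"],
    "post-editing":  ["fidelity", "edit-distance"],
}

# Second taxonomy: characteristics and the metrics that measure them.
METRICS = {
    "intelligibility": ["comprehension test score"],
    "fidelity":        ["adequacy rating"],
    "fluency":         ["fluency rating"],
    "terminology":     ["term error rate"],
    "edit-distance":   ["TER-style edit rate"],
}

def quality_model(context):
    """Generate the quality model (characteristic -> metrics) for a context."""
    return {c: METRICS[c] for c in CONTEXT_TO_CHARACTERISTICS[context]}
```

Following the links from a declared context thus yields only the metrics that matter for that evaluation, which is the framework's central idea.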

Ji, Meng “Quantifying Phraseological Style in Two Modern Chinese Versions of Don Quijote.” Meta vol. 53, n. 4 (2008).  pp. 937-941. http://id.erudit.org/iderudit/019664ar

Quantifying style, or stylometry, has always been one of the oldest traditions in Western literary studies. It seems, however, that such a well-explored and long-standing scientific methodology has rarely been applied to translations, as opposed to original literary texts. The present paper, which focuses on the stylistic use of phraseology in two contemporary Chinese versions of Cervantes’ Don Quijote, shall endeavour to address two current problems in corpus-based translation stylistics, i.e., the lack of debate on the question of semantically rich linguistic units in quantifying the style of translations, and the need for testing the use of methods and techniques adapted from corpus statistics in detecting stylistic traits in translations. It is hoped that this study, which aims at expanding the current methodological framework for translation stylistics, will help in the development of this growing area of research in Translation Studies.

Kaur, Kulwindr “Translation Accreditation Boards/Institutions in Malaysia.” The Translation Journal vol. 9, n. 4 (2005).  pp.: http://accurapid.com/journal/34malaysia.htm

Presently there are no Translation Accreditation Boards in Malaysia. The researcher was informed of this by Puan Siti Rafiah bt. Sulaiman, the Head of the Translation Section of the Malaysian National Institute of Translation (ITNMB). According to her, ITNMB is still in the process of drawing up translation programmes with the help of translator certification office-holders in America, New Zealand and Australia, i.e., the American Translators Association, New Zealand Translators Association and the Australian Translators Association. According to her, the certification office-holders of these associations will be contacted to evaluate ITNMB’s translation programmes and finally the authorities at ITNMB can have their translation courses accredited by authorities at the Malaysian Board of Accreditation or Lembaga Akreditasi Negara (LAN), which will issue the certificate of accreditation for ITNMB’s translation courses. The authorities at LAN can do this because although ITNMB reports to the government, it is registered under the Register of Companies and thus is still considered a private institution offering its own courses to the public. This has not been achieved as yet, but steps are now being taken in this direction.

Khalilov, Maxim and José Adrián Rodríguez Fonollosa “Comparación y combinación de los sistemas de traducción automática basados en n-gramas y en sintaxis.” Comparison and system combination of n-gram-based and syntax-based machine translation systems vol., n. 41 (2008).  pp. 259-266. http://rua.ua.es/dspace/bitstream/10045/8607/1/PLN_41_31.pdf

This article compares two systems based on two different approaches to machine translation: the Syntax-Augmented Machine Translation (SAMT) system, in which a syntax underlies the phrase-based model, and the n-gram-based statistical machine translation (SMT) system, in which the translation process rests on stochastic modelling of the bilingual context. The architectures of the two systems are compared step by step, and their results are also compared in terms of automatic translation quality evaluation measures and computational resources on a small Arabic-English task belonging to the news domain. Finally, the outputs of both systems are combined to obtain a significant improvement in translation quality. (A)

Ko, Leong “Quality Control versus Quantity Control in Training NAATI Translators and Interpreters.” Translation Watch Quarterly vol. 3, n. 1 (2007).  pp.:

In 2001, the Australian Department of Immigration and Multicultural Affairs introduced a new policy that allowed translation and/or interpreting practitioners with NAATI qualifications as Translators and/or Interpreters to migrate to Australia. Since then, all NAATI-approved programs at this level have been inundated with inquiries and applications. New programs at both public and private training institutes have been approved by NAATI, with many more still likely to be developed in future. This paper looks at various issues in this area, including problems that have been identified with training, issues surrounding quality control, impact on the translation and interpreting market, the role of NAATI in overseeing the quality of training, and the future prospects for translation and interpreting training in Australia. It focuses on the training of NAATI Translators/Interpreters and mainly deals with the Chinese language, including Mandarin in the case of interpreting.

Koby, Geoffrey S. and Brian James Baer “From Professional Certification to the Translator Training Classroom: Adapting the ATA Error Marking Scale.” Translation Watch Quarterly vol. 1, n. 1 (2005).  pp.:

Evaluation of translation quality is a central issue in translation pedagogy. The use of the error marking scale developed by the American Translators Association for the grading of certification exams is discussed as a way to introduce professional standards of error marking into the translator training classroom. The problems of adapting a product-oriented and testing-oriented scale for process-oriented classroom evaluation are explored, as well as the technical details of mathematically adapting the scale to an A-F grading system. An Excel spreadsheet is used to calculate grades and adjust for length of text.
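The mathematical adaptation described amounts to normalising the marked error points by text length and mapping the resulting rate onto letter grades. The sketch below shows the shape of such a conversion; the thresholds are invented for illustration and are neither the ATA's nor the authors':

```python
def grade(error_points, word_count, scale=None):
    """Map ATA-style error points to a letter grade.

    error_points: total penalty points marked on the translation;
    word_count: passage length, used to normalise to a
    points-per-100-words rate. Threshold values are illustrative only.
    """
    rate = 100.0 * error_points / word_count
    scale = scale or [(5, "A"), (10, "B"), (15, "C"), (20, "D")]
    for limit, letter in scale:
        if rate <= limit:
            return letter
    return "F"
```

The length adjustment is the key move: without it, a 15-point total would mean very different things on a 200-word exam passage and an 800-word classroom text.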

Koh, Sungryong, Jinee Maeng, et al. “Test Suite for Evaluation of English-to-Korean Machine Translation Systems.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/koh.pdf

This paper describes KORTERM’s test suite and its practicability. The test sets have been constructed on the basis of a fine-grained classification of linguistic phenomena, in order to evaluate the technical status of English-to-Korean MT systems systematically. They consist of about 5000 test sets and are growing. Each test set contains an English sentence, a model Korean translation, a linguistic phenomenon category, and a yes/no question about the linguistic phenomenon. Two commercial systems were evaluated with a yes/no test of prepared questions. The total accuracy rates of the two systems differed (50% vs. 66%). In addition, a comprehension test was carried out. We found that one system was more comprehensible than the other. These results seem to show that our test suite is practicable.
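Scoring such a suite is a matter of tallying the yes answers, overall and per linguistic phenomenon. A generic sketch, with a data shape we assume rather than one taken from the paper:

```python
from collections import defaultdict

def accuracy_by_phenomenon(results):
    """Per-phenomenon pass rates from (phenomenon, passed) pairs.

    results: iterable of (phenomenon_category, bool) items, one per
    test set, where the bool is the evaluator's yes/no answer.
    """
    totals = defaultdict(int)
    passed = defaultdict(int)
    for phenomenon, ok in results:
        totals[phenomenon] += 1
        if ok:
            passed[phenomenon] += 1
    return {p: passed[p] / totals[p] for p in totals}
```

The per-category breakdown is what makes a fine-grained suite useful: it points at the phenomena (e.g. relative clauses, passives) where a system loses its points.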

Kurz, Christopher “Translatorisches Qualitätsmanagement als verantwortungsvolles Handeln.” Lebende Sprachen vol. 54, n. 4 (2009).  pp. 146-155.

Translational quality management as responsible action. Lebende Sprachen, ISSN 0023-9909, vol. 54, n. 4 (2009), pp. 146-155.

Kurz, Ingrid “Conference Interpreting : Quality in the Ears of the User.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003364ar.pdf

What do the recipients of interpretation mean by ‘good interpretation’? What are the features they consider most important and what do they find irritating? Following a brief overview of user expectation surveys, the paper contends that the target audience is an essential variable in the interpretation equation. Quality of interpretation services is evaluated by users in terms of what they actually receive in relation to what they expected. Consequently, measurements of service quality that do not include user expectations miss the point.

Lambert, José “Measuring canonization: A reply to Paola Venturi.” Target: International Journal on Translation Studies vol. 21, n. 2 (2009).  pp. 358-363. http://dx.doi.org/10.1075/target.21.2.07lam

The article, “The translator’s immobility: English modern classics in Italy”, by Paola Venturi is an interesting illustration of the insights that can be gathered by international scholars from the perspectives of functional-systemic research as exemplified ­ mainly ­ by Gideon Toury (see http://www.tau.ac.il/~toury/) and Itamar Even-Zohar (http://www.tau.ac.il/~itamarez/). It is hardly necessary for me to stress the mutual complementarity of these two scholars’ methods: Having introduced (or re-introduced) translation into the cultural dynamics with the aid of the sociologically oriented concept of “norms”, Toury left space for Even-Zohar and others to deal with the general fluctuations on a variety of scales of cultural value of translated communication, as one among many forms of communication. Whether the conceptual tools that these two scholars have given us are in full harmony with each other, and whether all of their implications have been fully explored is not at issue here. Due to particular circumstances in the 1970s, their work has often been considered by translation scholars to be peculiarly relevant (only) for literary translation, though in fact the relevance of their concepts far transcends, and very explicitly so, the realms of literary scholarship (on translation). The perceived restriction to the particular sub-areas of translation studies may teach us more about the observers than about the observed.

Langlais, Philippe and Guy Lapalme “TransType: Development-Evaluation Cycles to Boost Translator’s Productivity.” Machine Translation vol. 17, n. 2 (2002).  pp. 77-98. http://dx.doi.org/10.1023/B:COAT.0000010117.98933.a0

Abstract We present TransType: a new approach to Machine-Aided Translation in which the human translator maintains control of the translation process while being helped by real-time completions proposed by a statistical translation engine. The TransType approach is first presented through a series of prototypes that illustrate its underlying translation model and graphical interface. The results of two rounds of in situ evaluation of TransType prototypes are discussed, followed by a set of lessons learned in these experiments. It will be shown that this approach is valued by translators, but, given the short time allotted for the evaluation, translators were not able to quantitatively increase their productivity. TransType is compared with other approaches and new perspectives are elaborated for a new version being developed in the context of a Fifth Framework European Community Project.

Larose, Robert “Méthodologie de l’évaluation des traductions.” Meta vol. 43, n. 2 (1998).  pp.: http://www.erudit.org/revue/meta/1998/v43/n2/003410ar.pdf

This article addresses the problems involved in evaluating translated texts. It covers four parameters for evaluation, looks at criteria used in various organizations and concludes with general considerations for ‘fair’ evaluation of texts.

Lauscher, Susanne “Translation Quality Assessment: Where Can Theory and Practice Meet?” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=189&type=pdf

Despite increased interest within translation studies in providing orientation for translation quality assessment (TQA), academic efforts in this area are still largely ignored, if not explicitly rejected, by the profession. The purpose of this paper is to investigate why scientific models for evaluating translations are difficult to apply and to outline a number of ways in which the gap between theoretical approaches and practical needs may be negotiated.

Lee-Jahnke, Hannelore “Aspects pédagogiques de l’évaluation des traductions.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003447ar.pdf

Partant de l’adage pédagogique qu’on ne saurait bien faire que ce dont on comprend parfaitement l’objectif, notre propos est de montrer des approches novatrices dans les trois domaines suivants : 1. différentes méthodes pour sensibiliser les étudiants à l’évaluation en général ; 2. l’évaluation « formative » telle qu’elle est pratiquée dans nos cours ; 3. projet sur une évaluation « sommative ».

Leppihalme, Ritva “The Two Faces of Standardization: On the Translation of Regionalisms in Literary Dialogue.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=194&type=pdf

Non-standard language varieties such as dialect and sociolect are known to present serious problems for translators. The function(s) they serve in the source text can be weakened or lost in translation because there may well be no target-language variety with sufficiently similar situational characteristics. On the other hand, the common strategy of rendering non-standard source-language dialogue by standard target-language dialogue can lead to loss of the linguistic identity of the work and its author. This paper examines standardization through the English translation of one of the Finnish author Kalle Päätalo’s early novels. It suggests that standardization is not necessarily only negative in its results, as target readers may be more interested in other aspects of the target text than its linguistic identity.

Lewis, Amber L. “The Practical Implications of a Minimum Machine Translation Unit.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 43, n. 2 (1997).  pp.:

A great deal of speculation dominates the translation industry with regard to the effectiveness of (MT) Machine Translation, or translation software. This project investigates the conclusions of Bennet (1994) about the size of the UT (unit of translation), based on the raw translations of a sample text as produced by four competitive PC programs. These programs are all transfer systems, which employ a minimum UT, such as a single noun phrase. The sample text is an authentic business correspondence text. A linguistic analysis of the four translations is performed. Results of the analysis show that numerous errors are committed which require the intervention of the professional translator. This research concludes that, for this type of text, a transfer system is not cost-effective because it will still require extensive human editing. The semantic errors particularly demonstrate the need to emphasize research towards the development of translation software which incorporates a larger UT.

Low, G. “Evaluating translations of surrealist poetry: Adding note-down protocols to close reading.” Target: International Journal on Translation Studies vol. 14, n. 2 (2003).  pp.: http://ejournals.ebsco.com/direct.asp?ArticleID=J2H0P8B3PT4NT14395JH

Evaluating translations of poetry will always be difficult. The paper focuses on the problems posed by French surrealist poetry, where the reader was held to be as important as the writer in creating interpretations, and argues that evaluations involving these poems inevitably require reader-response data. The paper explores empirically, in the context of André Breton’s ‘L’Union libre’, whether a modification of Think-Aloud procedure, called Note-Down, applied both to the original text and to three English translations, can contribute useful information to a traditional close reading approach. The results suggest that comparative Note-Down protocols permit simple cost-benefit analyses and allow one to track phenomena, like the persistence of an effect through the text, which might be hard to obtain by other methods.

Martínez Melis, Nicole “Evaluation et didactique de la traduction: le cas de la traduction dans la langue étrangère.” Tesis Doctorals en Xarxa (TDX) vol., n. (2002).  pp.: http://www.tdx.cbuc.es/TDX-1116101-145109/index.html

Cette thèse qui se situe dans la branche appliquée de la traductologie propose des procédures, des tâches et des critères pour l’évaluation – dans sa fonction sommative – de la compétence de traduction de l’étudiant dans le cadre de la didactique de la traduction dans la langue étrangère. Elle étudie l’évaluation en tant qu’objet de recherche des sciences de l’éducation, en explique l’évolution, les modèles et les notions-clés. Elle aborde l’évaluation en traduction, délimite trois domaines de l’évaluation en traduction – évaluation des traductions des textes littéraires et sacrés, évaluation dans l’activité professionnelle de la traduction, évaluation dans la didactique de la traduction – et dégage la spécificité de chacun de ces domaines ainsi que les aspects qu’ils ont en commun. Texto completo: Parte 1: http://www.tdx.cbuc.es/TESIS_UAB/AVAILABLE/TDX-1116101-145109//nmm1de2.pdf . Parte 2: http://www.tdx.cbuc.es/TESIS_UAB/AVAILABLE/TDX-1116101-145109//nmm2de2.pdf

Martinez Melis, Nicole and Amparo Hurtado “Assessment in Translation Studies : Research Needs.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003624ar.pdf

On the whole, most research into assessment in translation concentrates on only one area, evaluation of translations of literary and sacred texts, and other areas are ignored. In fact, this field of research includes two other areas, each with its own characteristics: assessment of professionals at work and assessment of trainee translators. Starting with this presupposition, we describe the three areas and analyze the notion of translation assessment, so as to define the characteristics of each area: objects, types, functions, aims and means of assessment. Next, we discuss the question of translation competence, and the concepts of translation problems and translation errors, in order to reach a general principle that should be applied in all assessment. Finally, we suggest assessment instruments to be used in teaching translation and make suggestions for research in assessing translator training, an area that has long been neglected and deserves serious attention.

Mayor, Aingeru, Iñaki Alegria, et al. “Evaluación de un sistema de traducción automática basado en reglas o por qué BLEU sólo sirve para lo que sirve.” Evaluation of a Rule-Based Machine Translation system or why BLEU is only useful for what it is meant to be used vol., n. 43 (2009).  pp. 197-205. http://www.sepln.org/revistaSEPLN/revista/43/articulos/art22.pdf

Matxin es un sistema de traducción automática basado en reglas que traduce a euskera. Para su evaluación hemos usado la métrica HTER que calcula el coste de postedición, concluyendo que un editor necesitaría cambiar 4 de cada 10 palabras para corregir la salida del sistema. La calidad de las traducciones del sistema Matxin ha podido ser comparada con las de un sistema basado en corpus, obteniendo el segundo unos resultados significativamente peores. Debido al uso generalizado de BLEU, hemos querido estudiar los resultados BLEU conseguidos por ambos sistemas, constatando que esta métrica no es efectiva ni para medir la calidad absoluta de un sistema, ni para comparar sistemas que usan estrategias diferentes. (A)
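
The abstract's point, that n-gram overlap and post-editing cost can tell different stories, can be illustrated with two toy metrics. This is a hedged sketch only: real evaluations use full BLEU (4-grams, multiple references, standard tokenization) and HTER over targeted post-edits, and the sentences below are invented.

```python
# Toy contrast between BLEU-like n-gram overlap (unigram + bigram, single
# reference) and an HTER-like word edit rate. Illustrative only.
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu2(hyp, ref):
    """Geometric mean of clipped 1- and 2-gram precision, with brevity penalty."""
    precisions = []
    for n in (1, 2):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        if not h:
            return 0.0
        clipped = sum(min(h.count(g), r.count(g)) for g in set(h))
        precisions.append(clipped / len(h))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

def edit_rate(hyp, ref):
    """Word-level Levenshtein distance over reference length (TER-like,
    but without TER's block-shift operation)."""
    d = [[i + j if 0 in (i, j) else 0 for j in range(len(ref) + 1)]
         for i in range(len(hyp) + 1)]
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = hyp[i - 1] != ref[j - 1]
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(hyp)][len(ref)] / len(ref)

ref = "the committee approved the proposal yesterday".split()
hyp = "the committee yesterday approved the proposal".split()
```

Here a legitimate word-order variation needs only two word edits, yet its bigram precision drops to 0.6: one reason an overlap metric can undervalue a system whose outputs differ structurally from the reference, as the Matxin comparison suggests.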

Mayor Serrano, María Blanca “Necesidades terminológicas del traductor de productos sanitarios evaluación de recursos (EN, ES).” Panace@: Revista de Medicina, Lenguaje y Traducción vol. 10, n. 31 (2010).  pp. 10-15. http://dialnet.unirioja.es/servlet/extart?codigo=3257681

A pesar de que la traducción de textos sobre productos sanitarios es sumamente compleja y requiere diversos tipos de conocimiento especializado, los investigadores apenas le han prestado atención desde un punto de vista terminológico, lo que da lugar a una significativa falta de productos terminográficos útiles para el traductor. Especialmente, se han pasado por alto las necesidades de los traductores, sobre las que ha de asentarse la metodología para la elaboración de tales productos. En este artículo presento una selección de recursos y analizo si atienden a las necesidades terminológicas de los traductores. Para su elaboración me han sido de gran utilidad los resultados obtenidos de una encuesta realizada en dos listas de debate sobre traducción médica (Tremédica y MedTrad).

O’Brien, Sharon “Methodologies for Measuring the Correlations between Post-Editing Effort and Machine Translatability.” Machine Translation vol. 19, n. 1 (2005).  pp. 37-58. http://dx.doi.org/10.1007/s10590-005-2467-1

Abstract Against the background of a wider research project that aims to investigate the correlation, if any, between post-editing effort and the presence of negative translatability indicators in source texts submitted to Machine Translation (MT), this paper sets out to assess the potential of two methods for measuring the effort involved in post-editing MT output. The first is based on the use of the keyboard-monitoring program Translog; the second on Choice Network Analysis (CNA). The paper reviews relevant research in both machine translatability and MT post-editing, and appraises, in particular, the suitability of think-aloud protocols in assessing post-editing effort. The combined use of Translog and CNA is proposed as a way of overcoming some of the difficulties presented by the use of think-aloud protocols in the current context. Initial results from a study conducted at Dublin City University confirm that triangulating data from Translog and CNA can cast light on the temporal, cognitive and technical aspects of post-editing effort.
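
One of the temporal indicators that keyboard monitoring supports is pause analysis: inter-keystroke gaps above a threshold are read as signs of cognitive effort. A minimal sketch with an invented log and a conventional (and debated) one-second cut-off; Translog itself records far richer data than this:

```python
# Pause-based effort sketch over a keystroke log. The log and the
# one-second threshold are illustrative assumptions, not Translog output.

PAUSE_THRESHOLD = 1.0  # seconds; a common but debated cut-off

def pause_stats(events, threshold=PAUSE_THRESHOLD):
    """events: chronologically ordered (timestamp_seconds, key) tuples."""
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    pauses = [g for g in gaps if g >= threshold]
    return {
        "total_time": events[-1][0] - events[0][0],
        "keystrokes": len(events),
        "pause_count": len(pauses),
        "pause_time": sum(pauses),
    }

# Invented fragment of a post-editing session
log = [(0.0, "T"), (0.2, "h"), (0.4, "e"), (2.1, " "), (2.3, "c"), (4.0, "a")]
stats = pause_stats(log)
```

Ratios such as pause time over total time can then be compared across sentences flagged (or not) by negative translatability indicators.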

Orozco, M. and A. H. Albir “Measuring Translation Competence Acquisition.” Meta vol. 47, n. 3 (2002).  pp.:

Orsted, Jeannette “Quality and Efficiency : Incompatible Elements in Translation Practice.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003766ar.pdf

The aim of this article is to describe the quality assessment procedures in a large, national translation company. The company is more than ten years old, but the past five years’ growth rates have been rapidly increasing. The growth in turnover can be attributed both to a high degree of customer loyalty based on a high level of efficiency and trust, and to high, well-defined and transparent quality standards. The company is based on the idea that translators should function in a working environment based on full-time employment. Consequently the increase in turnover has involved recruiting a large number of translators and support services in the IT department. This is why quality assessment procedures are no longer an individual responsibility, but have become a corporate issue. Quality procedures must therefore be part of the daily routines and involve all aspects of the business. To understand the conditions of the translation market today, the author provides an overview of the market based on the ASSIM study and information on the new economy. After that she presents the case of Translation House of Scandinavia, and finally she discusses some of the possible quality assurance systems that are available today and are used by the translation industry.

Ortín, Marcel “Els Dickens de Josep Carner i els seus crítics.” Quaderns vol. 7, n. (2002).  pp.: http://ddd.uab.es/search.py?&cc=quaderns&f=issue&p=11385790n7&rg=100&sf=fpage&so=a&as=0&sc=0&ln=ca

Entre els anys 1928 i 1931, en plena maduresa literària, Josep Carner va ocupar-se en la traducció de tres de les novel·les majors de Dickens: Pickwick Papers, David Copperfield i Great Expectations. Totes tres anaven destinades a la Biblioteca «A tot vent», la col·lecció de novel·la amb què van estrenar-se les Edicions Proa. Carner hi va portar una reflexió, sobre els requeriments de la llengua literària i sobre les virtualitats de l’art de traduir, en la qual havia anat aprofundint al llarg de trenta anys d’exercici. Els resultats que va obtenir amb Dickens cal analitzar-los a la llum d’aquesta reflexió, que pot donar raó de moltes solucions concretes. Des d’aquí és possible resseguir la controvèrsia recent sobre la qualitat real de les traduccions, i començar a plantejar el difícil problema de l’avaluació en l’àmbit de la traducció literària.

Owczarzak, Karolina, Josef Van Genabith, et al. “Evaluating machine translation with LFG dependencies.” Machine Translation vol. 21, n. 2 (2007).  pp. 95-119. http://dx.doi.org/10.1007/s10590-008-9038-1

Abstract In this paper we show how labelled dependencies produced by a Lexical-Functional Grammar parser can be used in Machine Translation evaluation. In contrast to most popular evaluation metrics based on surface string comparison, our dependency-based method does not unfairly penalize perfectly valid syntactic variations in the translation, shows less bias towards statistical models, and the addition of WordNet provides a way to accommodate lexical differences. In comparison with other metrics on a Chinese-English newswire text, our method obtains high correlation with human scores, both on a segment and system level.
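
The idea of scoring labelled dependencies rather than surface strings can be sketched as follows. The triples are hand-written here for illustration; the paper obtains them from an LFG parser and adds WordNet matching for lexical variation:

```python
# F-score over labelled dependency triples (label, head, dependent).
# Triples below are invented examples, not LFG parser output.

def triple_f1(hyp_triples, ref_triples):
    hyp, ref = set(hyp_triples), set(ref_triples)
    if not hyp or not ref:
        return 0.0
    p = len(hyp & ref) / len(hyp)
    r = len(hyp & ref) / len(ref)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

# "Yesterday the board met" vs. "The board met yesterday":
# different surface order, identical dependency structure.
ref = [("subj", "met", "board"), ("adjunct", "met", "yesterday"),
       ("det", "board", "the")]
hyp_reordered = list(ref)  # same triples despite the word-order change
hyp_wrong = [("subj", "met", "yesterday"), ("det", "board", "the")]
```

The reordered hypothesis scores a perfect 1.0 on triples even though its bigram overlap with the reference would be penalized, which is exactly the "valid syntactic variation" case the abstract describes.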

Paegelow, Richard S. “Ten Reasons Why Good Translations Sometimes Fail.” Translorial-Online vol., n. (1998).  pp.: http://www.ncta.org/displaycommon.cfm?an=1&subarticlenbr=24

Pinto Molina, María “Quality Factors in Documentary Translation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003840ar.pdf

Well aware of the difficulties involved in integrating translating models and quality systems, we offer an overview of relevant developments in the field. Particular emphasis is placed on the pragmatic connotations of translation and on the methodological aspects of the Quality Paradigm, an approach to documentary translation that focuses activity on the target user.

Ten Hacken, Pius “Has There Been a Revolution in Machine Translation?” Machine translation vol. 16, n. 1 (2001).  pp.: http://ipsapp009.lwwonline.com/content/getfile/4598/13/1/abstract.htm

When we compare the contributions on MT in the proceedings of Coling 1988 and Coling-ACL 1998, it seems obvious that in the period between them a revolution has taken place. Often this intuition is formulated as the replacement of linguistic approaches by statistical approaches. On closer inspection, however, this position cannot be defended. An analysis of Rosetta, concentrating on the different levels of discussion and of underlying assumptions, shows that the choice of knowledge from linguistic theories or information theory and corpora is by itself not a decisive issue. More important is the question of how the problem to be solved by an MT system is defined. An analysis of the decisions underlying Verbmobil, resulting in a list corresponding point by point to the one for Rosetta, shows how far-reaching the new approach to defining the problem of MT is. As it is shown that these systems are representative of the work in MT as it was done ten years ago and today, it can reasonably be argued that a revolution in MT has taken place, though not in exactly the way it is often believed.

Pöchhacker, Franz “Quality Assessment in Conference and Community Interpreting.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003847ar.pdf

On the assumption that interpreting can and should be viewed within a conceptual spectrum from international to intra-social spheres of interaction, and that high standards of quality need to be ensured in any of its professional domains, the paper surveys the state of the art in interpreting studies in search of conceptual and methodological tools for the empirical study and assessment of quality. Based on a selective review of research approaches and findings for various aspects of quality and types of interpreting, it is argued that there is enough common ground to hope for some cross-fertilization between research on quality assessment in different areas along the typological spectrum of interpreting activity.

Robinson, Bryan “‘Las ruinas circulares’ de Jorge Luis Borges.” Sendebar. Boletín de la Facultad de Traductores e Interpretes de Granada vol., n. 13 (2002).  pp.:

La traducción de un texto literario escrito por un autor tan consciente del discurso como lo era Borges requiere por parte del traductor un análisis especialmente riguroso de ese texto y de los procesos implicados en su lectura. La traducción de ‘Las ruinas circulares’ por James Irby (Yates & Irby 1970) es una lograda versión del original centrada en el texto, aunque su decisión de no hacer una traducción centrada en el lector es contraria al propósito comunicativo perseguido por Borges en el original. Esto se pone de manifiesto gracias al uso de herramientas proporcionadas por el análisis del discurso en la realización de este ejercicio de crítica de traducción. (A.)

Rosenmund, Alain “Konstruktive Evaluation : Versuch eines Evaluationskonzepts für den Unterricht.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003987ar.pdf

All translations should be assessed with regard to their context and aim. Therefore, and in order to objectivize the assessment as well as to prepare the students for the professional environment, the assessment of students’ translations should be based on specifications which have been worked out by the professor or lecturer and students beforehand.

Ruiz Rosendo, Lucía “La evaluación de Ia calidad en interpretación desde la perspectiva del usuario. Los congresos de medicina.” Sendebar. Boletín de la Facultad de Traductores e Interpretes de Granada vol., n. 16 (2005).  pp.:

Este artículo tiene como objetivo principal describir el estado de la calidad en interpretación, centrándonos en los congresos de medicina. Para ello, hemos dividido el trabajo que nos ocupa en distintos apartados, tres apartados más generales y un apartado específico de la interpretación en congresos de medicina: (1) análisis de las definiciones existentes del concepto de calidad; (2) estudio de los parámetros más relevantes que influyen y condicionan la calidad de la interpretación; (3) breve análisis de los estudios experimentales o empíricos pioneros realizados en el ámbito de la calidad desde la perspectiva del usuario, y (4) análisis de los estudios de calidad realizados en el ámbito de la medicina desde la perspectiva del usuario. Por último, hemos incluido un apartado en el que exponemos brevemente los resultados de un estudio empírico que hemos realizado entre intérpretes especializados en congresos de medicina, centrándonos en los resultados sobre los criterios de evaluación de la calidad en congresos médicos.

Selvaggini, Luisa and Alessandro Finzi “Analisi della correlazione tra giudizio estetico e valutazione di fedeltà all’originale in traduzioni dallo spagnolo.” Scrittura e riscrittura. Traduzioni, refundiciones, parodie e plagi: Atti del Convegno di Roma [Associazione Ispanisti Italiani] vol., n. (1995).  pp. 131-140. http://dialnet.unirioja.es/servlet/extart?codigo=2349302

Scrittura e riscrittura. Traduzioni, refundiciones, parodie e plagi: Atti del Convegno di Roma [Associazione Ispanisti Italiani], 1995, ISBN 88-7119-761-5.

Shashok, Karen “La calidad en el Servicio de Traducción de la Comisión Europea.” Panacea : boletín de medicina y traducción vol. 5, n. 16 (2004).  pp.: http://www.medtrad.org/panacea/PanaceaAnteriores.htm

Por gentileza de los organizadores, dos medtraderos pudimos asistir a la conferencia de Emma Wagner titulada «The Quest for Translation Quality in International Organizations» durante las IV Jornadas sobre la Formación y la Profesión del Traductor e Intérprete, organizadas por la Universidad Europea de Madrid (España; véase al respecto, en las páginas 183-186 de este número de Panace@, el artículo de Cáceres Würsig, Pérez González y Strotmann). Wagner trabajó para la Comisión Europea (CE) durante treinta años como traductora, correctora y directora del Servicio de Traducción (SdT), y ha destacado por su actitud crítica frente al lenguaje burocrático, opaco y recargado, uno de los grandes obstáculos para la buena traducción.

Sherwin, Ann C. “Buzzword or Bonanza? A Translator Reflects on Best Practice.” The Translation Journal vol. 10, n. 2 (2005).  pp.: http://accurapid.com/journal/

There’s no doubt that ‘best practice’ is a hot topic today. The exact phrase brings nearly 40 million hits with Google, including 16 sponsored links related to sales and marketing, education, research, manufacturing, information science, health care, and more. Amazon.com lists over 2300 books with ‘best practice’ as a keyword. To me it was pretty much just a buzzword. It sounded good, and I assumed it was an apt description of the way I ran my business

St. Andre, James “Between Tongues. Translation and/of/in Performance in Asia.” Target: International Journal on Translation Studies vol. 21, n. 2 (2009).  pp. 403-405. http://www.ingentaconnect.com/content/jbp/targ/2009/00000021/00000002/art00016

Jennifer Lindsay, ed. Between Tongues. Translation and/of/in Performance in Asia. Singapore: Singapore University Press, 2006. xvi + 302 pp. ISBN 9971-69-339-9. 28 USD. Reviewed by James St. André (Manchester)

Steiner, Erich “A Register-Based Translation Evaluation: An Advertisement as a Case in Point.” Target: International Journal on Translation Studies vol. 10, n. 2 (1998).  pp.:

Se estudian los elementos que no deben faltar en una evaluación de traducciones basada en el análisis del registro utilizado. En la primera sección se aboga por un enfoque eminentemente teórico en lo que respecta a la evaluación de traducciones, aunque también se tiene en cuenta el ámbito más general de la lingüística. En las secciones 2, 3 y 4 se analizan los aspectos concretos del campo, el tenor y el modo, mientras que en la 5 se expone que para evaluar una traducción también será necesario acudir a la lingüística comparativa y a las tipologías textuales. Por último, se insiste en que este tipo de evaluación acerca a la traducción y a la cogeneración, con lo que resulta posible establecer vínculos entre la calidad de las traducciones y la de otros textos en general.

Valero Garcés, Carmen “Cómo evaluar la competencia traductora. Varias propuestas.” Congrés Internacional sobre Traducció vol., n. 2 (1994).  pp.: http://ddd.uab.es/pub/traduccio/Actes4.pdf

El concepto de “buen traductor” es inherente a cualquier discusión en el campo de los estudios de traducción. Los formadores de traductores deben creer en ciertas características implícitas que tipifican a dicho profesional, de acuerdo con las cuales diseñan sus programas, seleccionan tipos de textos y materiales y aplican los procedimientos evaluativos que consideran apropiados. La primera pregunta que surge es qué debe saber y qué destrezas debe desarrollar y dominar el futuro traductor para poder traducir. De este modo se plantea el debate sobre la competencia del traductor y el modo de adquirir dicha cualidad.

Vanderschelden, Isabelle “Quality Assessment and Literary Translation in France.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=195&type=pdf

This article examines the current place of literary translation in the French literary polysystem. By considering the perspectives of various parties, such as publishers, literary translators and book reviewers, its objective is to survey the impact of translated literature in France and to explore the visibility of the literary translator and of translated literature. More specifically, the article raises the issue of quality assessment of translations published in France and analyzes some of the criteria applied both explicitly and implicitly when literary texts in translation are evaluated. The arguments developed here are based mainly on information collected about French publishers and literary translators from interviews or other accounts, and also on recent reviews of translated literature published in the French press.

Varela Salinas, María-José and Encarnación Postigo Pinazo “La evaluación en los estudios de traducción.” The Translation Journal vol. 9, n. 1 (2005).  pp.: http://accurapid.com/journal/31evaluacion.htm

Mejorar las posibilidades de evaluación en el campo de la traducción supone uno de los retos más importantes, ya que la evaluación del rendimiento académico es imprescindible, tanto por su imposición institucional como por la misma naturaleza de la actividad académica. Los problemas que habitualmente se le plantean al docente son diversos. Uno de los mayores es el de la subjetividad y la percepción personal tanto del evaluador como del evaluado a la hora de valorar el resultado de un proceso de enseñanza-aprendizaje, para el que en el campo de la traducción aún no hay suficientes criterios sistematizados. Esto reside, en parte, en una falta de conciencia de qué es lo que se tiene y se puede enseñar en las clases de traducción y, por tanto, de lo que es lo evaluable en una prueba académica de traducción (Goff-Kfouri, 2004).

Verdegal, Joan “Los neologismos literarios y sus efectos en traducción: Propuesta analítico-evaluadora de la distorsión (contexto francés-español/francés-catalán).” Sendebar. Boletín de la Facultad de Traductores e Interpretes de Granada vol., n. 13 (2002).  pp.:

Vidal, Mirta “NAJIT Certification on the Way.” Proteus vol. 9, n. 3 (2000).  pp.: http://www.najit.org/proteus/v9n3/vidal_v9n3.htm

Some of you may think NAJIT’s efforts to create a certification program for judiciary interpreters have been a long time coming. Actually, most of you were not even members when the idea began to be seriously considered. I remember sitting with Dagoberto Orrantia and Janis Palma, who was then chair, in a restaurant in San Juan nine years ago, having a heated argument about whether or not we should have an exam, what kind of an exam, and how it could be done. And that was only the first of many heated arguments, as Cristina will remember, because the importance of the issue makes people very passionate about the subject.

Viola Rodrigues, Sara “Translation quality: a Housian analysis.” Meta vol. 41, n. 2 (1996).  pp.: http://www.erudit.org/revue/meta/1996/v41/n2/003969ar.pdf

Se analiza el modelo de evaluación de traducciones ideado en 1981 por Juliane House. A pesar de que se ha quedado un poco anticuado, es el que mejor ha funcionado hasta el momento y supone una gran evolución con respecto a los que existían anteriormente.

Waddington, Christopher “Different Methods of Evaluating Student Translations : The Question of Validity.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/004583ar.pdf

This article examines the criterion-related validity of the results obtained by the application of four different methods of assessment to the correction of a second-year exam of translation into the foreign language (Spanish-English) done by 64 university students. These four methods are based on types currently used by university teachers, and the validation study is based on 17 external criteria taken from six different sources. In spite of this variety, a factor analysis reveals the presence of one main factor which is clearly identifiable as Translation Competence. The hypotheses regarding differences between the validity of the methods are verified as null, since all the systems, whether based on error analysis or a holistic approach, prove to correlate significantly with this main factor.
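
The validity question, whether different marking methods tap the same underlying competence, comes down to how strongly their scores correlate. A minimal sketch with invented marks for five students; the study itself works with 64 students, 17 external criteria and a factor analysis:

```python
# Pearson correlation between two marking methods. Marks are invented.

def pearson(xs, ys):
    """Pearson's r for two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

error_based = [6.0, 7.5, 5.0, 8.0, 4.5]   # marks from error analysis
holistic    = [6.5, 7.0, 5.5, 8.5, 4.0]   # marks from a holistic scale

r = pearson(error_based, holistic)
```

A high r between methods is consistent with the article's finding that error-analytic and holistic systems both load on the same competence factor.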

Waddington, C. “Measuring the effect of errors on translation quality.” Lebende Sprachen vol. 51, n. 2 (2006).  pp.:

Way, Andy and Nano Gough “Controlled Translation in an Example-based Environment: What do Automatic Evaluation Metrics Tell Us?” Machine Translation vol. 19, n. 1 (2005).  pp. 1-36. http://dx.doi.org/10.1007/s10590-005-1403-8

Abstract This paper presents an extended, harmonised account of our previous work on integrating controlled language data in an Example-based Machine Translation system. Gough and Way in MT Summit pp. 133-140 (2003) focused on controlling the output text in a novel manner, while Gough and Way (9th Workshop of the EAMT, (2004a), pp. 73-81) sought to constrain the input strings according to controlled language specifications. Our original sub-sentential alignment algorithm could deal only with 1:1 matches, but subsequent refinements enabled n:m alignments to be captured. A direct consequence was that we were able to populate the system’s databases with more than six times as many potentially useful fragments. Together with two simple novel improvements (correcting a small number of mistranslations in the lexicon, and allowing multiple translations in the lexicon), translation quality improves considerably. We provide detailed automatic and human evaluations of a number of experiments carried out to test the quality of the system. We observe that our system outperforms the rule-based on-line system Logomedia on a range of automatic evaluation metrics, and that the ‘best’ translation candidate is consistently highly ranked by our system. Finally, we note in a number of tests that the BLEU metric gives objectively different results than other automatic evaluation metrics and a manual evaluation. Despite these conflicting results, we observe a preference for controlling the source data rather than the target translations.

Williams, Malcolm “The Application of Argumentation Theory to Translation Quality Assessment.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/004605ar.pdf

Translation quality assessment (TQA) models may be divided into two main types: (1) models with a quantitative dimension, such as SEPT (1979) and Sical (1986), and (2) non-quantitative, textological models, such as Nord (1991) and House (1997). Because it tends to focus on microtextual (sampling, subsentence) analysis and error counts, Type 1 suffers from some major shortcomings. First, because of time constraints, it cannot assess, except on the basis of statistical probabilities, the acceptability of the content of the translation as a whole. Second, the microtextual analysis inevitably hinders any serious assessment of the content macrostructure of the translation. Third, the establishment of an acceptability threshold based on a specific number of errors is vulnerable to criticism both theoretically and in the marketplace. Type 2 cannot offer a cogent acceptability threshold either, precisely because it does not propose error weighting and quantification for individual texts. What is needed is an approach that combines the quantitative and textological dimensions, along the lines proposed by Bensoussan and Rosenhouse (1990) and Larose (1987, 1998). This article outlines a project aimed at making further progress in this direction through the application of argumentation theory to instrumental translations.

Yoshimi, Takehiko “Improvement of Translation Quality of English Newspaper Headlines by Automatic Pre-editing.” Machine translation vol. 16, n. 4 (2001).  pp.: http://www.springerlink.com/media/n0fprkwvtn4cc2nhxnby/contributions/r/3/5/1/r35132023577045u.pdf

Since the headlines of English news articles have a characteristic style, different from the styles which prevail in ordinary sentences, it is difficult for MT systems to generate high-quality translations for headlines. We try to solve this problem by adding to an existing system a pre-editing module which rewrites headlines as ordinary expressions. Rewriting of headlines makes it possible to generate better translations which would not otherwise be generated, with little or no change to the existing parts of the system. Focusing on the absence of a form of the verb be as a missing part of normal English, we have described rewriting rules for properly inserting the verb be into headlines, based on information obtained by morpho-lexical and rough syntactic analysis. We have incorporated the proposed method into our English–Japanese MT system, and carried out an experiment with 312 headlines as unknown data. Our method achieved a satisfactory 81.2% recall and 92.0% precision.
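
The paper's actual rules rely on morpho-lexical and rough syntactic analysis; purely as an illustration of the idea, here is a naive rule that restores a missing form of the verb be before a past participle. The participle list and the plural test are invented stand-ins, not the paper's resources:

```python
# Naive stand-in for morpho-lexical analysis: a tiny list of past
# participles that signal a reduced passive in a headline.
PARTICIPLES = {"bitten", "killed", "elected", "found", "arrested"}

def insert_be(headline):
    """Toy pre-editing rule: restore a missing 'is'/'are' before a
    known past participle (e.g. 'Man bitten by dog').
    The subject is crudely treated as plural if it ends in 's'."""
    tokens = headline.split()
    for i, tok in enumerate(tokens[1:], start=1):
        if tok.lower() in PARTICIPLES:
            be = "are" if tokens[i - 1].lower().endswith("s") else "is"
            return " ".join(tokens[:i] + [be] + tokens[i:])
    return headline  # no rule applies; leave the headline unchanged
```

For example, insert_be("Man bitten by dog") yields "Man is bitten by dog", while headlines that match no rule pass through untouched, mirroring the paper's point that pre-editing needs little or no change to the rest of the system.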

Young-Suk, Lee, Daniel J. Sinder, et al. “Interlingua-based English–Korean Two-way Speech Translation of Doctor–Patient Dialogues with CCLINC.” Machine translation vol. 17, n. 3 (2002).  pp.: http://ipsapp009.kluweronline.com/IPS/content/ext/x/J/4598/I/20/A/1/abstract.htm

Development of a robust two-way real-time speech translation system exposes researchers and system developers to various challenges of machine translation (MT) and spoken language dialogues. The need for communicating in at least two different languages poses problems not present for a monolingual spoken language dialogue system, where no MT engine is embedded within the process flow. Integration of various component modules for real-time operation poses challenges not present for text translation. In this paper, we present the CCLINC (Common Coalition Language System at Lincoln Laboratory) English–Korean two-way speech translation system prototype trained on doctor–patient dialogues, which integrates various techniques to tackle the challenges of automatic real-time speech translation. Key features of the system include (i) language–independent meaning representation which preserves the hierarchical predicate–argument structure of an input utterance, providing a powerful mechanism for discourse understanding of utterances originating from different languages, word-sense disambiguation and generation of various word orders of many languages, (ii) adoption of the DARPA Communicator architecture, a plug-and-play distributed system architecture which facilitates integration of component modules and system operation in real time, and (iii) automatic acquisition of grammar rules and lexicons for easy porting of the system to different languages and domains. We describe these features in detail and present experimental results.

Yuen Wan, Ngan and Kong Wai Ping “The Effectiveness of Electronic Dictionaries as a Tool for Translators.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 43, n. 2 (1997).  pp.:

In view of the growing popularity of electronic dictionaries, the Consumer Council of Hong Kong, a statutory body financed by annual subvention from the Government of Hong Kong to protect and promote the interests of the consumers of goods and services, conducted a survey to evaluate the effectiveness of the various functions of 15 models of electronic dictionaries available in 1994. The authors, who served as consultants in all language-related aspects of this survey, will evaluate the usefulness of these dictionaries to translators on the basis of the survey findings. Their vocabulary database in the realms of difficult, modern, and scientific and technical words as well as phrases will be explored in that lists of words and phrases are meticulously compiled before the words and phrases are checked in the dictionaries. Moreover, as two electronic dictionaries claim that they could translate English sentences into Chinese, different types of sentences are tested to see whether or not they are able to produce satisfactory translations.

Zequan, Liu “Translation Quality Assessment.” The Translation Journal vol. 7, n. 3 (2003).  pp.: http://accurapid.com/journal/25register.htm

Register, or context of situation as it is formally termed, ‘is the set of meanings, the configuration of semantic patterns, that are typically drawn upon under the specific conditions, along with the words and structures that are used in the realization of these meanings’ (Halliday, 1978:23). It is concerned with the variables of field, tenor, and mode, and is a useful abstraction which relates variations of language use to variations of social context. Therefore, register analysis of linguistic texts, which enables us to uncover how language is manoeuvred to make meaning, has received popular application in (critical) discourse analysis and (foreign) language teaching pedagogy.

Angelelli, Claudia V. “Validating professional standards and codes: Challenges and opportunities.” Interpreting vol. 8, n. 2 (2006).  pp.:

This article presents a focus group study on the validation of the California Standards for Healthcare Interpreters produced by the California Healthcare Interpreting Association (CHIA) in 2002. The reactions of healthcare interpreters to the Standards, and their opinions and thoughts on its provisions are reviewed and analyzed. The article first addresses the issues and problems healthcare interpreters encounter when implementing the Standards, and highlights the challenges they face when trying to balance their professional mandate with the reality of their working environment. In particular, it describes the difficulties of defining the interpreter’s role in the system. The final section of the article draws attention to the need for bridges between research and practice as a means of guaranteeing that the field of interpreting will continue to develop.

Antia, Bassey E. “Competence and quality in the translation of specialized texts: investigating the role of terminology resources.” Quaderns vol. 6, n. (2001).  pp.: http://ddd.uab.es/search.py?&cc=quaderns&f=issue&p=11385790n6&rg=100&sf=fpage&so=a&as=0&sc=0&ln=ca

The experiment reported here is part of a broader study (Antia, in press). Due to space constraints, the present discussion omits a number of relevant issues, which can however be found in chapter 3 of the broader study. Cognizant of this forum on empirical-experimental research in translation, the current discussion addresses certain issues that were not of primary concern in the main study.

Arevalillo Doval, Juan José “A propósito de la norma europea de calidad para los servicios de traducción.” El español, lengua de traducción vol., n. 2 (2004).  pp.:

The world of translation has undergone an undeniable revolution in recent years, driven largely by the application of computing to the translator's daily work. Indeed, in a relatively short period the translator has gone from working with pen and typewriter to handling the most complex word processors on the market. So much so that even the dictaphone, until recently one of translators' favourite devices, has been displaced by software that lets the translator dictate a sight translation to the computer, which transcribes the text on screen with a surprising accuracy rate. There is no doubt that computing has drawn the translator out of his legendary isolation and opened up a long list of resources of every kind that ease his task to a degree unthinkable until very recently and help overcome the old barriers of time and distance. Moreover, the Internet, computer-assisted translation programs, word processors, terminology tools and other software shared with other sectors have boosted the productivity of today's translator.

Arevalillo, Juan José “Componentes principales de un programa informático I.” La linterna del traductor vol., n. 8 (2004).  pp.: http://traduccion.rediris.es/loca2.htm

As far as localization is concerned, four components interest us from the translator's point of view: the user interface, online help, printed documentation, and complementary material. Complementary processes not related to translation are excluded from this article. The main characteristics of each are explained below.

Arevalillo, Juan José “Componentes principales de un programa informático II.” La linterna del traductor vol., n. 8 (2004).  pp.: http://traduccion.rediris.es/loca3.htm

This is possibly the most voluminous component in terms of word count. It used to compete with the printed documentation in this respect, but the current trend is to produce complete help texts with numerous internal links, to the detriment of printed documentation, which is seeing its number of manuals shrink, with the consequent savings in production costs. This component does not reach the technical level of the user interface, but it does require certain knowledge, since the translated texts must also be compiled to produce the help files the user can consult. There are two main types of help: WinHelp and HTML.

Arevalillo, Juan José “Presencia de la localización en el mercado y su formación específica.” La linterna del traductor vol., n. 8 (2004).  pp.: http://traduccion.rediris.es/loca4.htm

According to estimates by the American Translators Association (ATA), 10% of the world's translation output is literary translation. If those figures reflect current reality, within the remaining 90% localization can enjoy a good slice of the pie. In its introductory guide to the localization industry, LISA (2003: 21) gives the following figures: LISA puts the total size of the worldwide localization industry at a minimum of 3.7 billion dollars a year, with a likely figure of around 5 billion (some estimates go as high as 15 billion). The information technology segment of the localization industry alone moves close to 10 billion dollars (including all vertical markets, the figure is substantially higher). By way of comparison, recent figures put the size of the translation industry at between 11 and 18 billion dollars (according to the American Translators Association, ATA) or 30 billion dollars (according to the European Commission).

Bastin, Georges L. “Evaluating Beginners’ Re-expression and Creativity: A Positive Approach.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=193&type=pdf

Although translation may be considered a two- (or three-) phase communication process, consisting of comprehension, (conceptualization) and re-expression, most theoretical and pedagogical studies have been devoted to comprehension and conceptualization. There is, however, an increasing need to establish a theoretical basis for the third phase since, contrary to Boileau's dictum (that well-conceived ideas can be easily expressed), even when comprehension is complete, words do not come easily. If re-expression is to be better taught, evaluation of re-expression must be better thought out. This paper focuses on the evaluation of re-expression in translation, based on an in-depth study of various English texts translated into French by some 38 first-year translation students.

Bel, Núria “Review of Dybkjær, Laila; Hemsen, Holmer; Minker, Wolfgang (eds) Evaluation of Text and Speech Systems.” Machine Translation vol. 21, n. 1 (2007).  pp. 73-76. http://dx.doi.org/10.1007/s10590-008-9037-2

Beverly, Adab “The Translation of Advertising A Framework for Evaluation.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 47, n. 2 (2002).  pp.: http://www.ebsco.com/online/direct.asp?ArticleID=1GW7R7QFH6JVJQ8HQBQ9

In Towards a Science of Translating (1969), Nida asserts that “There will always be a variety of valid answers to the question, ‘Is this a good translation?’” In the professional translation environment, the whole question of how to evaluate a translated text is one which poses a challenge to the client, to the translator and to those responsible for training the translator. Much has been written about the difficulty of identifying (objectively) verifiable and perhaps more widely generalisable criteria for this form of evaluation, which needs to relate to the functional adequacy (Nord 1997, Toury 1995) of the translated text for its intended purpose. Such criteria would be equally welcome as guidelines for the actual translation process, to assist the translator in selecting from possible translation alternatives. Think-aloud protocols have tried to identify what goes on in the ‘black box’ and the cognitive processes involved in the process of text production (Kussmaul 1991, 1995). However, TAPs are a means to an end, the end being the aim of achieving a better understanding of the process in order to minimise the occurrence of potential errors and rationalise and optimise the process. This article attempts to show how Descriptive Analysis (see Toury 1995) of text pairs can highlight potentially successful strategy types, in relation to aspects of a functionalist approach to text production. Having determined which text production criteria can be of use in evaluating the potential success of a translation choice within a text, it should be possible to formulate a set of guidelines against which translators could test choices at micro- and macro-textual levels.

Bowker, Lynne “A Corpus-Based Approach to Evaluating Student Translations.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=191&type=pdf

Translation evaluation is highly problematic because of its subjective nature. In a translation classroom, efforts must be made to develop an approach to translation evaluation that enables evaluators to provide objective and constructive feedback to their students. This article describes a specially-designed Evaluation Corpus and presents an experiment which demonstrates that such a corpus can be used to significantly reduce the subjective element in translation evaluation, and illustrates that this reduced subjectivity will benefit both evaluators and students.

Bowker, Lynne “Towards a Methodology for a Corpus-Based Approach to Translation Evaluation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002135ar.pdf

Translation evaluation is undoubtedly one of the most difficult tasks facing a translator trainer. It is unlikely that there will ever be a ready-made formula that will transform this task into a simple one; however, this article suggests that the task can be made some what easier by using a specially designed Evaluation Corpus that can act as a benchmark against which translator trainers can compare student translations.

Brunette, Louise “Towards a Terminology for Translation Quality Assessment: A Comparison of TQA Practices.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=190&type=pdf

Recent research on the revision and assessment of general texts has revealed that the terms and concepts used in discussing this process are somewhat confused, hence the need to map out the terminology used in various evaluative practices. This article offers an overview of translation assessment and attempts to define the key terms specific to this field, including subfields such as translation management quality control (assessment; formative revision) as well as revision theory (assessment criteria; purpose). Each concept and term is discussed at length and exemplified. The article focuses initially on various assessment procedures, including pragmatic revision, translation quality assessment, quality control, didactic revision, and ‘fresh look’. For these procedures to be scientifically credible and ethically acceptable, they must be based on clearly defined criteria. Thus, the second part of the article puts forward criteria which have been delimited and duly tested in prior research, namely: logic, context, purpose and language norm.

Buendía Castro, Miriam and José Manuel Ureña Gómez-Moreno “¿Cómo diseñar un corpus de calidad?: parámetros de evaluación.” Sendebar: Revista de la Facultad de Traducción e Interpretación vol., n. 21 (2010).  pp. 165-180.

Cáceres Würsig, Ingrid, Luis Pérez González, et al. “Calidad y traducción: perspectivas académicas y profesionales.” Panacea : boletín de medicina y traducción vol. 5, n. 16 (2004).  pp.: http://www.medtrad.org/panacea/PanaceaAnteriores.htm

On 25, 26 and 27 February 2004, the 4th Conference on Translator and Interpreter Training and Practice, "Quality and Translation: Academic and Professional Perspectives", was held, organized by the Department of Translation and Interpreting of the Faculty of Communication and Humanities of the Universidad Europea de Madrid. Sponsored by leading companies in the sector (Star, Reinisch, Déjà Vu and Hermes), the conference attracted a total of 251 attendees from 19 countries. To address the general theme, three speakers from different areas of the teaching and practice of translation and interpreting were invited: Emma Wagner spoke on translation quality in international organizations; Daniel Gile on quality in translator and interpreter training; and, finally, Miguel Núñez on the ACT's participation in drawing up a quality standard for professional translation services. (Further information on the proceedings via birgit.s@ing.fil.uem.es.)

Callison-Burch, Chris and Raymond S. Flournoy “A Program for Automatically Selecting the Best Output from Multiple Machine Translation Engines.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/callison.pdf

This paper describes a program that automatically selects the best translation from a set of translations produced by multiple commercial machine translation engines. The program is simplified by assuming that the most fluent item in the set is the best translation. Fluency is determined using a trigram language model. Results are provided illustrating how well the program performs for human ranked data as compared to each of its constituent engines.
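
The fluency ranking described above can be sketched with an add-one-smoothed trigram language model; the training sentences and vocabulary size below are invented, and the paper does not specify its smoothing scheme:

```python
import math
from collections import Counter

def train_trigram_lm(corpus):
    """Count trigrams and their bigram contexts over boundary-padded sentences."""
    tri, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>", "<s>"] + sent.split() + ["</s>"]
        for i in range(len(toks) - 2):
            tri[tuple(toks[i:i + 3])] += 1
            bi[tuple(toks[i:i + 2])] += 1
    return tri, bi

def fluency(sentence, tri, bi, vocab_size):
    """Add-one-smoothed trigram log-probability: the fluency proxy."""
    toks = ["<s>", "<s>"] + sentence.split() + ["</s>"]
    return sum(
        math.log((tri[tuple(toks[i:i + 3])] + 1)
                 / (bi[tuple(toks[i:i + 2])] + vocab_size))
        for i in range(len(toks) - 2)
    )

def best_translation(candidates, tri, bi, vocab_size):
    """Pick the candidate MT output the language model finds most fluent."""
    return max(candidates, key=lambda c: fluency(c, tri, bi, vocab_size))
```

Each engine's output is scored by the same model and the highest-scoring string wins, which captures the paper's simplifying assumption that the most fluent candidate is the best translation.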

Campbell, Stuart “Critical Structures in the Evaluation of Translations from Arabic into English as a Second Language.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=192&type=pdf

It is argued in this paper that the output of translators working into English as a second language can be evaluated by means of examining their ability to translate certain critical structures. These claims are made on the basis of data-based research with the support of a cognitive theory about language processing during translation, and an analytical procedure that models the decision pathways of translators.

Cancelo, Pablo “Evaluation of machine translation systems.” Últimas Corrientes Teóricas En Los Estudios de Traducción y sus Aplicaciones vol., n. (2001).  pp.:

Machine translation products are currently receiving a considerable amount of hype. At one end of the scale are mass media reports on one product after another that use the latest magical technique to produce nearly perfect translations. Unfortunately, these reports are usually based on the manufacturers’ promotional press releases, and make it into print without any attempt at verification or review. At the other end of the spectrum are the detractors of machine translation, those who assert that all translation programs are useless, and the whole effort is a meaningless waste of time. In the middle, however, is another group of people – of which this researcher is one – who hold that machine translation technology, while not perfect, has progressed in recent years and some of the systems can render a source language document into an understandable, though rough, target language translation.

Clifford, Andrew “Discourse Theory and Performance-Based Assessment : Two Tools for Professional Interpreting.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002345ar.pdf

This article examines interpreter assessment and draws attention to the limits of a lexico-semantic approach. It proposes using features of discourse theory to identify some of the competencies needed to interpret and suggests developing assessment instruments with the technical rigour common in other fields. The author gives examples of discursive features in interpretation and shows how these elements might be used to construct a rubric for assessing interpreter performance.

Colina, Sonia “Translation Quality Evaluation: Empirical Evidence for a Functionalist Approach.” The Translator vol. 14, n. 1 (2008).  pp. 97-134. http://www.stjerome.co.uk/periodicals/journal.php?j=72&v=563&i=564

Following a review of existing approaches to translation quality evaluation, this paper describes a proposal for evaluation that addresses some of the deficiencies found in these models. The proposed approach is referred to as componential because it evaluates components of quality separately, and functionalist, because evaluation is carried out relative to the function specified for the translated text. In order to obtain some empirical evidence for the functionalist/componential approach, a tool was developed and pilot-tested for inter-rater reliability. In addition, the research project sought to obtain some data on qualifications of raters/users and their performance using the tool. Forty raters were asked to use the tool to rate three translated texts. The texts selected for evaluation consisted of reader-oriented health education materials. Raters were bilinguals, professional translators and language teachers. Some basic training was provided. Data was collected by means of the tool and a questionnaire. Results indicate good inter-rater reliability for the tool; teachers’ and translators’ ratings were more alike than those of bilinguals; bilinguals were found to rate higher and faster than the other groups. The results provide support for further research and testing of this tool and offer evidence in favour of the approach proposed.
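
Inter-rater reliability of the kind reported here is commonly quantified with a chance-corrected agreement coefficient. The article does not say which statistic was used, so the following Cohen's kappa for two raters is only an illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items:
    observed agreement corrected for chance agreement.
    Undefined when both raters use one identical label throughout."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random.
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa is 1.0 for perfect agreement and 0.0 when raters agree no more often than chance, which is why it is a stricter check than raw percent agreement for studies like this one.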

Colina, Sonia “Further evidence for a functionalist approach to translation quality evaluation.” Target: International Journal on Translation Studies vol. 21, n. 2 (2009).  pp. 235-264. http://dx.doi.org/10.1075/target.21.2.02col

Colina (2008) proposes a componential-functionalist approach to translation quality evaluation and reports on the results of a pilot test of a tool designed according to that approach. The results show good inter-rater reliability and justify further testing. The current article presents an experiment designed to test the approach and tool. Data was collected during two rounds of testing. A total of 30 raters, consisting of Spanish, Chinese and Russian translators and teachers, were asked to rate 4-5 translated texts (depending on the language). Results show that the tool exhibits good inter-rater reliability for all language groups and texts except Russian and suggest that the low reliability of the Russian raters’ scores is unrelated to the tool itself. The findings are in line with those of Colina (2008).

Conde Ruano, Tomás “Propuestas para la evaluación de estudiantes de traducción.” Sendebar: Revista de la Facultad de Traducción e Interpretación vol., n. 20 (2009).  pp. 231-255.

This paper analyses a problematic situation: translation evaluation in the teaching environment. After describing the circumstances under which this activity is carried out, the paper focuses on the main problems concerned and finally makes various proposals based on data both from empirical research and from other theoretical studies. In short, the paper argues in favour of continuous assessment, blind evaluation and holistic systems involving subjective aspects and the assessment of learning, rather than of specific performance. In addition, a fresh interpretation of the roles adopted by teachers and students is proposed, together with a more flexible and transparent application of evaluation criteria, the promotion of self-regulated learning and collaboration between students, and exam conditions that resemble actual professional practice.

Darwin, Maki “Trial and Error: An Evaluation Project on Japanese <> English MT Output Quality.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/darwin.pdf

This paper describes a small-scale but organized attempt to evaluate output quality of several Japanese MT systems. The project also served as the first experiment of the implementation of the in-house MT evaluation guidelines created in 2000. Since time was limited and the budget was not infinite, it was launched with the following compact components: Five people; 300 source sentences per language pair; and 160 hours per evaluator. The quantitative results showed noteworthy phenomena. Although the test materials had been presented in a way that evaluators could not identify the performance of any particular system, the results were quite consistent.

Dean, Robyn K. and Robert Q. Pollard Jr “Effectiveness of Observation-Supervision Training in Community Mental Health Interpreting Settings.” redit: Revista electrónica de didáctica de la traducción y la interpretación vol., n. 3 (2009).  pp. 1-17. http://dialnet.unirioja.es/servlet/extart?codigo=3150216

Observation-supervision (O-S) is a problem-based learning approach to interpreter education. This mixed-methods study implemented O-S over four geographically diverse iterations in community mental health settings. Forty American Sign Language interpreters participated in O-S groups and forty others comprised two control groups. Measures included a pre-post test of mental health knowledge, a mental health interpreting practical exam, and objective and subjective participant evaluations. The results indicate that O-S was superior to an equivalent amount of didactic training in imparting mental health knowledge. Practical exam and participant evaluation results indicate that O-S was more effective in imparting interpreting judgment and ethical decision-making skills. O-S can be employed in other specialized interpreting practice settings and with spoken as well as signed language interpreters.

Delisle, Jean “L’évaluation des traductions par l’historien.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002514ar.pdf

A rigorous method for evaluating translations is as necessary to the historian as to the teacher of translation. Drawing on the theoretical work of Henri Meschonnic, we attempt to show that translations of the past, mainly of literary texts, cannot be evaluated on the basis of rules laid down by translator-theorists in their treatises on how to translate, and that philological analysis and contrastive linguistics are likewise insufficient to judge the success or failure of a translated text. The historian of translation will instead seek to know whether the translated work has the historicity of the original, and whether the translation-as-re-creation has invented its own poetics and replaced problems of language with solutions of discourse. Translating only the meaning of a work carries the risk of suppressing its literariness and its poetics, resulting in the production of a non-text.

Dewaele, Jean-Marc “Évaluation du texte interprété: sur quoi se basent les interlocuteurs natifs?” Meta vol. 39, n. 1 (1994).  pp.: http://www.erudit.org/revue/meta/1994/v39/n1/002561ar.pdf

One of the golden rules of interpreting is that one should work only into one's mother tongue. Nevertheless, professionals are frequently required to interpret into a second language. But what are the linguistic characteristics of this type of discourse, and what judgment does the interpreter form of his or her own communicative ability in the target language? To answer this question, a study was carried out gathering the opinions of a number of native speakers about a speaker who is also a native. The variables on which they base their judgments relate above all to the lexical richness of the discourse, hesitations and interruptions, and so on. A similar analysis of what native speakers think of a non-native speaker would, however, still be needed.

Uí Dhonnchadha, Elaine, Caoilfhionn Nic Pháidín, et al. “Design, Implementation and Evaluation of an Inflectional Morphology Finite State Transducer for Irish.” Machine Translation vol. 18, n. 3 (2003).  pp. 173-193. http://dx.doi.org/10.1007/s10590-004-2480-9

Minority languages must endeavour to keep up with and avail of language technology advances if they are to prosper in the modern world. Finite state technology is mature, stable and robust. It is scalable and has been applied successfully in many areas of linguistic processing, notably in phonology, morphology and syntax. In this paper, the design, implementation and evaluation of a morphological analyser and generator for Irish using finite state transducers is described. In order to produce a high-quality linguistic resource for NLP applications, a complete set of inflectional morphological rules for Irish is handcrafted, as is the initial test lexicon. The lexicon is then further populated semi-automatically using both electronic and printed lexical resources. Currently we achieve coverage of 89% on unrestricted text. Finally we discuss a number of methodological issues in the design of NLP resources for minority languages.
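
The analyse/generate duality of such a transducer can be sketched as follows. This is only a toy illustration with a hypothetical English-plural rule set, not the two-level Irish transducer the authors describe; a real FST composes a rule cascade with a lexicon rather than enumerating pairs.

```python
# A minimal sketch of the analyse/generate duality of a morphological
# finite-state transducer. The rules and lexicon below are invented
# (toy English plurals), standing in for the paper's handcrafted
# inflectional rules for Irish.

RULES = [
    # (lexical form, surface form) pairs; a finite relation is enough
    # to show that analysis and generation are the two directions of
    # the same mapping.
    ("cat+N+Sg", "cat"),
    ("cat+N+Pl", "cats"),
    ("man+N+Sg", "man"),
    ("man+N+Pl", "men"),
]

def generate(lexical):
    """Map a lemma+tag string to its surface forms (downward direction)."""
    return [s for l, s in RULES if l == lexical]

def analyse(surface):
    """Map a surface form back to lemma+tag analyses (upward direction)."""
    return [l for l, s in RULES if s == surface]

print(generate("man+N+Pl"))
print(analyse("cats"))
```

Coverage of unrestricted text, as reported in the paper, would then amount to the proportion of running tokens for which `analyse` returns at least one analysis.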

Espunya, Anna “Contrastive and translational issues in rendering the English progressive form into Spanish and Catalan: an informant-based study’.” Meta vol. 46, n. 3 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n3/002710ar.pdf

This is a study on the formal correspondences for the English progressive in translations from English to Spanish and Catalan, with a special focus on the choice between simple and progressive forms. Its methodological approach includes the participation of informants both as translators and as evaluators of published translations. The paper discusses both the language-internal and task-related factors that play a role in the choice of verb forms.

Estival, Dominique “Karen Sparck Jones & Julia R. Galliers, Evaluating Natural Language Processing Systems: An Analysis and Review. Lecture Notes in Artificial Intelligence 1083.” Machine Translation vol. 12, n. 4 (1997).  pp. 375-379. http://dx.doi.org/10.1023/A:1007918307730

Fan, May and Xu Xunfeng “An evaluation of an online bilingual corpus for the self-learning of legal English.” System vol. 30, n. 1 (2002).  pp.: http://www.sciencedirect.com/science/article/B6VCH-44HX9WX-1/2/a9eccf25e787c2cb5f0aaf1d2b19ba49

Based on a relatively simple but innovative idea of inserting hyperlinks at the sentence level between parallel texts, a bilingual corpus of legal and documentary texts in English and Chinese has been created and made available online together with a web-based concordancer. In addition to introducing such a corpus, this paper reports a study which seeks to evaluate the usefulness of the corpus in the self-learning of legal English. The subjects involved were a group of Chinese students doing a degree in Translation at a university in Hong Kong, where English Common Law is still used after the handover in 1997, when the sovereignty of Hong Kong was restored from Britain to China. The instruments for data collection included two comprehension tasks, a questionnaire and a follow-up interview. Findings of the study indicate that students considered the bilingual corpus useful as they needed both language versions in the understanding of legal provisions, though they were found to rely more on Chinese. Interesting data in relation to how users of the bilingual corpus switched between the two languages have also been obtained. This paper also investigates how the inherent characteristics of legal English contribute to the comprehension difficulty of L2 learners irrespective of the help obtained from the bilingual corpus.

Farreús, Mireia, Marta R. Costa-Jussà, et al. “Study and correlation analysis of linguistic, perceptual, and automatic machine translation evaluations.” Journal of the American Society for Information Science and Technology vol. 63, n. 1 (2012).  pp. 174-184. http://dx.doi.org/10.1002/asi.21674

Evaluation of machine translation output is an important task. Various human evaluation techniques as well as automatic metrics have been proposed and investigated in the last decade. However, very few evaluation methods take the linguistic aspect into account. In this article, we use an objective evaluation method for machine translation output that classifies all translation errors into one of the five following linguistic levels: orthographic, morphological, lexical, semantic, and syntactic. Linguistic guidelines for the target language are required, and human evaluators use them to classify the output errors. The experiments are performed on English-to-Catalan and Spanish-to-Catalan translation outputs generated by four different systems: two rule-based and two statistical. All translations are evaluated using the following three methods: a standard human perceptual evaluation method, several widely used automatic metrics, and the human linguistic evaluation. Pearson and Spearman correlation coefficients between the linguistic, perceptual, and automatic results are then calculated, showing that the semantic level correlates significantly with both perceptual evaluation and automatic metrics.
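
The correlation step the abstract mentions can be sketched in pure Python. The scores below are invented for illustration (a per-system semantic error count against a hypothetical human perceptual rating); the ranking step ignores ties for brevity.

```python
# Sketch of Pearson and Spearman correlation between two evaluation
# signals: a linguistic error count and a human perceptual score.
# Data are hypothetical; more errors should track lower human scores,
# giving a strong negative correlation.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Pearson correlation computed on ranks (ties ignored for brevity).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    return pearson(ranks(x), ranks(y))

semantic_errors = [3, 7, 2, 9]        # one value per MT system
human_scores    = [4.1, 2.5, 4.6, 1.8]
print(pearson(semantic_errors, human_scores))
print(spearman(semantic_errors, human_scores))
```

With these invented numbers both coefficients come out strongly negative, which is the pattern the article reports for the semantic level.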

Fawcett, Peter “Translation in the Broadsheets.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=196&type=pdf

Despite the decline in literary translation into English documented by some scholars, the broadsheets and review journals published in the United Kingdom continue to invite reviewers – who are themselves usually creative authors in their own right – to review translated literature. Occasionally, broader questions of translation are also discussed. This paper examines a sample of such reviews in an attempt to uncover the parameters defining the usually implicit framework within which translation criticism is conducted and what seems to be the overwhelmingly preferred translation strategy.

Fuji, Masaru and Hitoshi Isahara “Evaluation Method for Determining Groups of Users Who Find MT “Useful”.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/fuji.pdf

We used two commercial English-Japanese (E-J) MT systems to prepare machine-translated reading texts; two systems were used in order to find out whether the obtained results are system-dependent.

Fulford, Heather “Translation Tools: An Exploratory Study of their Adoption by UK Freelance Translators.” Machine translation vol. 16, n. 4 (2001).  pp.: http://ipsapp007.lwwonline.com/content/getfile/4598/16/3/fulltext.pdf

The rising demand for translations over the last few decades has led to the recognition that software tools were urgently needed to help increase translators’ productivity, and to support them in their efficient and effective delivery of accurate and consistent translations in ever-shorter time periods (Lang and Bennett, 2000: 203). In order to help inform and guide this software development, a number of researchers discussed the nature of the support required by translators.

Gabr, Moustafa “Program Evaluation : A Missing Critical Link in Translator Training.” The Translation Journal vol. 5, n. 1 (2001).  pp.: http://accurapid.com/journal/15training.htm

Translation, being a craft on the one hand, requires training, i.e. practice under supervision, and being a science on the other hand, has to be based on language theories. Therefore, any sound approach to translation teaching has to draw on proper training methodologies. Training focuses on the improvement of the knowledge, skills and abilities of the individual, and it is functional and relevant only when it is evaluated (Zenger and Hargis, 1982). When we evaluate a training course, we actually evaluate its effectiveness, i.e. we measure the achievement of its objectives. A training course can be effective in meeting some objectives and be ineffective in meeting others. For example, a translation course may accomplish its objective of improving the students’ text analysis skills and fail in promoting their cross-cultural awareness.

Gamon, Michael, Hisami Suzuki, et al. “Using Machine Learning for System-Internal Evaluation of Transferred Linguistic Representations.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/gamon.pdf

We present an automated, system-internal evaluation technique for linguistic representations in a large-scale, multilingual MT system. We use machine-learned classifiers to distinguish linguistic representations generated by transfer in an MT context from representations produced by ‘native’ analysis of the target language. In the MT scenario, convergence of the two is the desired result. Holding the feature set and the learning algorithm constant, the accuracy of the classifiers provides a measure of the overall difference between the two sets of linguistic representations: classifiers with higher accuracy correspond to more pronounced differences between representations. More importantly, the classifiers yield the basis for error analysis by providing a ranking of the importance of linguistic features. The more salient a linguistic criterion is in discriminating transferred representations from ‘native’ representations, the more work will be needed in order to get closer to the goal of producing native-like MT. We present results from using this approach on the Microsoft MT system and discuss its advantages and possible extensions.

Garcia Alvarez, Ana Maria “Der translatorische Kommentar als Evaluationsmodell der studentischen Übersetzungsprozesse.” Lebende sprachen vol. 53, n. 1 (2008).  pp. 26-31.

The translation commentary as a model for evaluating students’ translation processes.

Gerzymisch-Arbogast, Heidrun “Equivalence Parameters and Evaluation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002886ar.pdf

This article deals with the role of equivalence parameters in the evaluation of translations. After a brief overview of the discussions surrounding the very concept of equivalence, we propose to examine this concept on two levels: that of the system, on the basis of which the evaluation criteria are established, and that of the text, which allows the selection of the specific criteria for evaluating the text in question as well as a ranking of those criteria (from the evaluator's point of view). At the system level we propose to include the parameters of coherence and of thematic and/or isotopic networks in the catalogue of evaluation criteria. At the text level we discuss some translation variances inherent in these parameters.

Ghassan Hassan Al, Shatter “Implementation and Evaluation of a New Learning Approach in Arabic: Implications for Translator Training.” Translation Watch Quarterly vol. 3, n. 1 (2007).  pp.:

This paper discusses planning and implementing a new learning approach for teaching Arabic as part of the University General Requirements Unit at the United Arab Emirates University. The new learning approach challenges the traditional teaching methodology used in the United Arab Emirates. The planning and implementation scheme is analyzed, and training, teaching style, and classroom management processes are evaluated. The study examines responses by the University administration, faculty members, and students to the introduction of this new teaching methodology. It suggests that teaching standard Arabic as part of the University’s general education requirements is important for Arab students who wish to be successful in their studies at the University as well as in their professional lives. The implications for translators are also addressed.

Gile, Daniel “L’évaluation de la qualité de l’interprétation en cours de formation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/002890ar.pdf

The evaluation of interpreting quality during training differs from professional evaluation essentially because of its orienting function and the considerable weight it gives to the interpreting process, as opposed to the target discourse alone. It is proposed that process-oriented evaluation be used at the beginning of training, both for its psychological advantages and for its effectiveness in guiding students. It will nevertheless be necessary to move progressively towards product-oriented evaluation, in order to strengthen the teacher's influence on the polishing of the product and to prepare students for the aptitude tests at the end of the programme. Any difference between the teachers' standards and those of the market poses no fundamental problem as long as it concerns the required level, which is higher in training, and not the interpreter's norms and strategies.

Goff-Kfouri, Carol Ann “Testing and Evaluation in the Translation Classroom.” The Translation Journal vol. 8, n. 3 (2004).  pp.: http://accurapid.com/journal/29edu.htm

It is not at all uncommon today for professional translators to be invited to teach a course at a university. Many translators, though flattered at being invited to teach, are hesitant to accept the position due to their lack of pedagogical knowledge. One particular problematic area is that of marking translations and making decisions on student competence. This paper presents the basic information professional translators need to know before they enter the classroom, and outlines possible testing strategies they might use to make their teaching experience enriching and valuable for themselves as well as their students.

Guessoum, Ahmed and Rached Zantout “A Methodology for a Semi-Automatic Evaluation of the Lexicons of Machine Translation Systems.” Machine translation vol. 16, n. 2 (2001).  pp.: http://ipsapp009.lwwonline.com/content/getfile/4598/14/3/abstract.htm

The lexicon is a major part of any Machine Translation (MT) system. If the lexicon of an MT system is not adequate, this will affect the quality of the whole system. Building a comprehensive lexicon, i.e., one with a high lexical coverage, is a major activity in the process of developing a good MT system. As such, the evaluation of the lexicon of an MT system is clearly a pivotal issue for the process of evaluating MT systems. In this paper, we introduce a new methodology that was devised to enable developers and users of MT Systems to evaluate their lexicons semi-automatically. This new methodology is based on the idea of the importance of a specific word or, more precisely, word sense, to a given application domain. This importance, or weight, determines how the presence of such a word in, or its absence from, the lexicon affects the MT system’s lexical quality, which in turn will naturally affect the overall output quality. The method, which adopts a black-box approach to evaluation, was implemented and applied to evaluating the lexicons of three commercial English–Arabic MT systems. A specific domain was chosen in which the various word-sense weights were determined by feeding sample texts from the domain into a system developed specifically for that purpose. Once this database of word senses and weights was built, test suites were presented to each of the MT systems under evaluation and their output rated by a human operator as either correct or incorrect. Based on this rating, an overall automated evaluation of the lexicons of the systems was deduced.
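
The weighted coverage measure the abstract describes can be sketched as follows. The word senses, weights and correctness ratings below are hypothetical; in the methodology itself the weights are derived from sample domain texts and the ratings come from a human operator judging each system's output.

```python
# Sketch of a weighted lexical-quality score: each word sense carries a
# domain-importance weight, and quality is the weight-normalised share
# of senses the MT lexicon handles correctly. Integer weights are used
# here only to keep the arithmetic exact.

def lexical_quality(weights, correct):
    """weights: sense -> domain weight; correct: set of senses rated correct."""
    total = sum(weights.values())
    covered = sum(w for sense, w in weights.items() if sense in correct)
    return covered / total

# Hypothetical finance-domain senses and weights.
domain_weights = {"bank#finance": 6, "interest#finance": 3, "teller": 1}
rated_correct = {"bank#finance", "teller"}
print(lexical_quality(domain_weights, rated_correct))  # 0.7
```

The black-box character of the approach shows up in the fact that only the correct/incorrect rating of the output enters the score, not anything about the system's internals.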

Guessoum, A. and R. Zantout “Semi-automatic Evaluation of the Grammatical Coverage of Machine Translation Systems.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/guessoum.pdf

In this paper we present a methodology for automating the evaluation of the grammatical coverage of machine translation (MT) systems. The methodology is based on the importance of unfolded grammatical structures, which represent the most basic syntactic pattern for a sentence in a given language. A database of unfolded grammatical structures is built to evaluate the parser of any NLP or MT system. The evaluation results in an overall measure called the grammatical coverage. The results of implementing the above approach on three English-to-Arabic commercial MT systems are presented.

Guessoum, Ahmed and Rached Zantout “A Methodology for Evaluating Arabic Machine Translation Systems.” Machine Translation vol. 18, n. 4 (2004).  pp. 299-335. http://dx.doi.org/10.1007/s10590-005-2412-3

This paper presents a methodology for evaluating Arabic Machine Translation (MT) systems. We are specifically interested in evaluating lexical coverage, grammatical coverage, semantic correctness and pronoun resolution correctness. The methodology presented is statistical and is based on earlier work on evaluating MT lexicons, in which the importance of a specific word sense to a given application domain determines how its presence in, or absence from, the lexicon affects the MT system’s lexical quality, which in turn will affect the overall system output quality. The same idea is used in this paper and generalized so as to apply to grammatical coverage, semantic correctness and correctness of pronoun resolution. The approach adopted in this paper has been implemented and applied to evaluating four English-Arabic commercial MT systems. The results of the evaluation of these systems are presented for the domain of the Internet and Arabization.

Hagemann, S. “Zur Evaluierung kreativer Übersetzungsleistungen.” Lebende sprachen vol. 52, n. 3 (2007).  pp. 102-109.

On the evaluation of creative translation performance.

Hajmohammadi, A. “Translation Evaluation in a News Agency.” Perspectives-Studies in Translatology vol. 13, n. 3 (2005).  pp.:

In this article, I argue that most approaches to translation evaluation that are central to Translation Studies scholars and teachers are out of touch with market demands. I present the working conditions of translators in a news agency and discuss the evaluation of translation performance in the market. I am particularly keen on calling attention to the differences between academic and market parameters for evaluation. First, there is a presentation of the purpose of evaluation in news agency environments and subsequently, I describe the assessment of news translation. I finish by examining two parameters of evaluation, which, in my opinion, distinguish translation evaluation in the market from the academy. The suggestions are based on my observations as a translation evaluator in IRIB news agency, Tehran.

Hale, Sandra Beatriz, Nigel Bond, et al. “Interpreting accent in the courtroom.” Target vol. 23, n. 1 (2011).  pp. 48-61. http://www.ingentaconnect.com/content/jbp/targ/2011/00000023/00000001/art00004
http://dx.doi.org/10.1075/target.23.1.03hal

Findings from research conducted into interpreted court proceedings have suggested that it is the interpreters’ rendition that the judiciary and jurors hear and upon which they base their evaluations of witnesses’ testimony. Previous research into the effect of foreign accent of witnesses indicated particular foreign accents negatively influence mock jurors’ evaluations of the testimony. The aim of this study was to examine the effect of interpreters’ foreign accents on the evaluation of witnesses’ testimony. Contrary to previous research, our results indicated that participants rated the witness more favourably when testimony was interpreted by an interpreter with a foreign language accent. Accented versions were all rated as more credible, honest, trustworthy and persuasive than the non-accented versions. This paper discusses the findings in the light of methodological concerns and limitations, and highlights the need for further research in the area.

Hampshire, Stephen and Carmen Porta Salvia “Translation and the Internet: Evaluating the Quality of Free Online Machine Translators.” Quaderns: Revista de traducció vol., n. 17 (2010).  pp. 197-209. http://ddd.uab.cat/pub/quaderns/11385790n17p197.pdf

The late 1990s saw the advent of free online machine translators such as Babelfish, Google Translate and Transtext. Professional opinion regarding the quality of the translations provided by them oscillates wildly, from the «laughably bad» (Ali, 2007) to «a tremendous success» (Yang and Lange, 1998). While the literature on commercial machine translators is vast, there are only a handful of studies, mostly in blog format, that evaluate and rank free online machine translators. This paper offers a review of the most significant contributions in that field with an emphasis on two key issues: (i) the need for a ranking system; (ii) the results of a ranking system devised by the authors of this paper. Our small-scale evaluation of the performance of ten free machine translators (FMTs) in «league table» format shows what a user can expect from an individual FMT in terms of translation quality. Our rankings are a first tentative step towards allowing the user to make an informed choice as to the most appropriate FMT for his/her source text and thus produce higher FMT target text quality.

Hassani, Ghodrat “A Corpus-Based Evaluation Approach to Translation Improvement.” Meta vol. 56, n. 2 (2011).  pp. 351-373. http://id.erudit.org/iderudit/1006181ar

In professional settings translation evaluation has always been weighed down by the albatross of subjectivity to the detriment of both evaluators as clients and translators as service providers. But perhaps this burden can be lightened, through ongoing evaluator feedback and exchange that foster objectivity among the evaluators while sharpening the professional skills and recognition of the translators. The purpose of this paper is to explore the promising avenues that a corpus-based evaluation approach can possibly offer them. Using the Corpus of Contemporary American English (COCA) for evaluation purposes in a professional setting, the approach adopted for this study regards translation evaluation as a means to a worthwhile end, in a nutshell, better translations. This approach also illustrates how the unique features of the corpus can minimize subjectivity in translation evaluation; this in turn leads to translations of superior quality.
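
The corpus-based check underlying such an approach can be sketched as follows. In practice the evaluator queries COCA through its web interface; the toy corpus and the candidate renderings below are invented for illustration.

```python
# Sketch of corpus-based evaluation: comparing two candidate renderings
# by their frequency in a reference corpus, the more frequent phrasing
# being the safer, more idiomatic choice. The corpus here is a tiny
# invented token list standing in for COCA.

corpus = (
    "the committee made a decision yesterday . "
    "they made a decision to proceed . "
    "she took a decision reluctantly ."
).split()

def phrase_count(corpus_tokens, phrase):
    """Count occurrences of a multi-word phrase in a token list."""
    p = phrase.split()
    n = len(p)
    return sum(corpus_tokens[i:i + n] == p
               for i in range(len(corpus_tokens) - n + 1))

for candidate in ["made a decision", "took a decision"]:
    print(candidate, phrase_count(corpus, candidate))
```

Replacing an evaluator's intuition with a frequency comparison of this kind is what lets the approach trade subjective judgment for an observable, arguable criterion.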

Hatim, Basil “Translating Quality Assessment.” The Translator vol. 4, n. 1 (1998).  pp. 91-100. http://www.stjerome.co.uk/periodicals/viewfile.php?id=119&type=pdf

Review of A Model for Translation Quality Assessment (Tübinger Beiträge zur Linguistik 88). Juliane House. Tübingen: Gunter Narr, 1977/1981. 344 pp. Pb. ISBN 3-87808-088-3.

Helmreich, Stephen and David Farwell “Translation Differences and Pragmatics-Based MT.” Machine translation vol. 13, n. 1 (1998).  pp.: http://ipsapp007.lwwonline.com/content/getfile/4598/4/3/fulltext.pdf

This paper examines differences between two professional translations into English of the same Spanish newspaper article. Among other explanations for these differences, such as outright errors and free variation, we find a significant number of differences are due to differing beliefs on the part of the translators about the subject matter and about what the author wished to say. Furthermore, these differences are consistent with divergent global views of the translators about the likelihood of future events (earthquakes and tidal waves) and about (rational or irrational) reactions of people to such likelihood. We discuss the requirements for a pragmatics-based model of translation that would account for these differences.

Rhine-Medina, Carol “Interpreted Psychological Evaluations.” Proteus vol. 13, n. 3 (2004).  pp.: http://www.najit.org/proteus/v13n3/Vol13_No3_Rhine-Medina.PDF

Sooner or later, a judiciary interpreter is bound to come into contact with psychiatric assignments. Exposure to this facet of our judicial system may materialize in a variety of forms. One may be mass calendar calls of yellow-clad (in many counties) inmates claiming or suspected to be unfit to comprehend the charges against them or stand trial, some of whom may have requested removal to state psychiatric facilities. Judges issue rulings in individual hearings and order psychiatric examinations, referred to by section number, depending on the objective of the evaluation.

House, Juliane “Translation Quality Assessment : Linguistic Description vs Social Evaluation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003141ar.pdf

The paper first reports on three different approaches to translation evaluation which emanate from different concepts of ‘meaning’ and its role in translation. Secondly, a functional-pragmatic model of translation evaluation is described, which features a distinction between different types of translations and versions, and stresses the importance of using a ‘cultural filter’ in one particular type of translation. Thirdly, the influence of English as a worldwide lingua franca on translation processes is discussed, and finally the important distinction between linguistic analysis and social judgement in translation evaluation is introduced, and conclusions for the practice of assessing the quality of a translation are drawn.

Hovy, Eduard, Margaret King, et al. “Principles of Context-Based Machine Translation Evaluation.” Machine translation vol. 17, n. 1 (2002).  pp.: http://ipsapp009.kluweronline.com/IPS/content/ext/x/J/4598/I/17/A/3/abstract.htm

This article defines a Framework for Machine Translation Evaluation (FEMTI) which relates the quality model used to evaluate a machine translation system to the purpose and context of the system. Our proposal attempts to put together, into a coherent picture, previous attempts to structure a domain characterised by overall complexity and local difficulties. In this article, we first summarise these attempts, then present an overview of the ISO/IEC guidelines for software evaluation (ISO/IEC 9126 and ISO/IEC 14598). As an application of these guidelines to machine translation software, we introduce FEMTI, a framework that is made of two interrelated classifications or taxonomies. The first classification enables evaluators to define an intended context of use, while the links to the second classification generate a relevant quality model (quality characteristics and metrics) for the respective context. The second classification provides definitions of various metrics used by the community. Further on, as part of ongoing, long-term research, we explain how metrics are analyzed, first from the general point of view of “meta-evaluation”, then focusing on examples. Finally, we show how consensus on the present framework is sought, and how feedback from the community is taken into account in the FEMTI life-cycle.
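
The two linked taxonomies can be sketched as a pair of mappings in which a context of use selects quality characteristics, and each characteristic selects its metrics. The entries below are simplified and hypothetical, not the official FEMTI lists.

```python
# Sketch of FEMTI's two interrelated classifications: context of use
# -> quality characteristics -> metrics. Taxonomy entries are invented
# placeholders illustrating the mechanism only.

CONTEXT_TO_QUALITIES = {
    "gisting": ["fidelity"],
    "dissemination": ["fidelity", "fluency", "terminology"],
}

QUALITY_TO_METRICS = {
    "fidelity": ["adequacy rating", "semantic error count"],
    "fluency": ["fluency rating"],
    "terminology": ["term-match rate"],
}

def quality_model(context):
    """Derive a quality model (characteristic -> metrics) from an
    intended context of use, following the links between taxonomies."""
    return {q: QUALITY_TO_METRICS[q] for q in CONTEXT_TO_QUALITIES[context]}

print(quality_model("gisting"))
```

The point of the design is that evaluators never pick metrics directly: they describe the context, and the links between the two classifications produce the quality model.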

Ji, Meng “Quantifying Phraseological Style in Two Modern Chinese Versions of Don Quijote.” Meta vol. 53, n. 4 (2008).  pp. 937-941. http://id.erudit.org/iderudit/019664ar

Quantifying style, or stylometry, has always been one of the oldest traditions in Western literary studies. It seems, however, that such a well-explored and long-standing scientific methodology has rarely been applied to translations, as opposed to original literary texts. The present paper, which focuses on the stylistic use of phraseology in two contemporary Chinese versions of Cervantes’ Don Quijote, shall endeavour to address two current problems in corpus-based translation stylistics, i.e., the lack of debate on the question of semantically-rich linguistic units in quantifying the style of translations, and the need for testing the use of methods and techniques adapted from corpus statistics in detecting stylistic traits in translations. It is hoped that this study, which aims at expanding the current methodological framework for translation stylistics, will help in the development of this growing area of research in Translation Studies.

Kaur, Kulwindr “Translation Accreditation Boards/Institutions in Malaysia.” The Translation Journal vol. 9, n. 4 (2005).  pp.: http://accurapid.com/journal/34malaysia.htm

Presently there are no Translation Accreditation Boards in Malaysia. The researcher was informed of this by Puan Siti Rafiah bt. Sulaiman, the Head of the Translation Section of the Malaysian National Institute of Translation (ITNMB). According to her, ITNMB is still in the process of drawing up translation programmes with the help of translator certification office-holders in America, New Zealand and Australia, i.e., the American Translators Association, New Zealand Translators Association and the Australian Translators Association. According to her, the certification office-holders of these associations will be contacted to evaluate ITNMB’s translation programmes and finally the authorities at ITNMB can have their translation courses accredited by authorities at the Malaysian Board of Accreditation or Lembaga Akreditasi Negara (LAN), which will issue the certificate of accreditation for ITNMB’s translation courses. The authorities at LAN can do this because although ITNMB reports to the government, it is registered under the Register of Companies and thus is still considered a private institution offering its own courses to the public. This has not been achieved as yet, but steps are now being taken in this direction.

Khalilov, Maxim and José Adrián Rodríguez Fonollosa “Comparación y combinación de los sistemas de traducción automática basados en n-gramas y en sintaxis.” Comparison and system combination of n-gram-based and syntax-based machine translation systems vol., n. 41 (2008).  pp. 259-266. http://rua.ua.es/dspace/bitstream/10045/8607/1/PLN_41_31.pdf

En este artículo se comparan dos sistemas basados en dos aproximaciones diferentes de traducción automática: El denominado sistema de la Traducción Automática Aumentado con Sintaxis (SAMT / TAAS), basado en una sintaxis subyacente al modelo basado en frases, y el sistema de traducción automática estadística (TAE) basado en n-gramas en el cual el proceso de traducción está basado en el modelado estocástico del contexto bilingüe. Se realiza una comparación de la arquitectura de los dos sistemas paso a paso y se comparan también los resultados en base a las medidas automáticas de evaluación de la calidad de traducción y los recursos computacionales para una pequeña tarea árabe-inglés que pertenece al dominio de noticias. Finalmente, se combinan las salidas de ambos sistemas para obtener una mejora significativa de la calidad de la traducción. (A)

Ko, Leong “Quality Control versus Quantity Control in Training NAATI Translators and Interpreters.” Translation Watch Quarterly vol. 3, n. 1 (2007).  pp.:

In 2001, the Australian Department of Immigration and Multicultural Affairs introduced a new policy that allowed translation and/or interpreting practitioners with NAATI qualifications as Translators and/or Interpreters to migrate to Australia. Since then, all NAATI-approved programs at this level have been inundated with inquiries and applications. New programs at both public and private training institutes have been approved by NAATI, with many more still likely to be developed in future. This paper looks at various issues in this area, including problems that have been identified with training, issues surrounding quality control, impact on the translation and interpreting market, the role of NAATI in overseeing the quality of training, and the future prospects for translation and interpreting training in Australia. It focuses on the training of NAATI Translators/Interpreters and mainly deals with the Chinese language, including Mandarin in the case of interpreting.

Koby, Geoffrey S. and Brian James Baer “From Professional Certification to the Translator Training Classroom: Adapting the ATA Error Marking Scale.” Translation Watch Quarterly vol. 1, n. 1 (2005).  pp.:

Evaluation of translation quality is a central issue in translation pedagogy. The use of the error marking scale developed by the American Translators Association for the grading of certification exams is discussed as a way to introduce professional standards of error marking into the translator training classroom. The problems of adapting a product-oriented and testing-oriented scale for process-oriented classroom evaluation are explored, as well as the technical details of mathematically adapting the scale to an A-F grading system. An Excel spreadsheet is used to calculate grades and adjust for length of text.
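A hypothetical sketch of the kind of conversion the article describes: error points (the ATA scale assigns point values per error) are normalized for text length and mapped to a letter grade. The per-100-words normalization and the cut-off thresholds below are invented for illustration; they are not the authors' actual spreadsheet formula.

```python
# Illustrative sketch: map accumulated ATA-style error points to an
# A-F grade, adjusting for the length of the translated text.
# All thresholds here are hypothetical, not taken from the article.

def letter_grade(error_points: int, word_count: int) -> str:
    """Normalize error points to a per-100-words basis and map to A-F."""
    per_100 = error_points * 100 / word_count
    # Illustrative cut-offs (assumption, not the ATA or the authors' scale):
    if per_100 <= 5:
        return "A"
    elif per_100 <= 10:
        return "B"
    elif per_100 <= 15:
        return "C"
    elif per_100 <= 20:
        return "D"
    return "F"

print(letter_grade(12, 250))  # 4.8 points per 100 words
```

The point of normalizing by length is the adjustment the abstract mentions: twelve error points in a 250-word passage should not be graded like twelve points in a 100-word one.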

Koh, Sungryong, Jinee Maeng, et al. “Test Suite for Evaluation of English-to-Korean Machine Translation Systems.” Congreso sobre traducción automática vol., n. 8 (2001).  pp.: http://www.eamt.org/summitVIII/papers/koh.pdf

This paper describes KORTERM’s test suite and their practicability. The test-sets have been being constructed on the basis of fine-grained classification of linguistic phenomena to evaluate the technical status of English-to-Korean MT systems systematically. They consist of about 5000 test-sets and are growing. Each test-set contains an English sentence, a model Korean translation, a linguistic phenomenon category, and a yes/no question about the linguistic phenomenon. Two commercial systems were evaluated with a yes/no test of prepared questions. Total accuracy rates of the two systems were different (50% vs. 66%). In addition, a comprehension test was carried out. We found that one system was more comprehensible than the other system. These results seem to show that our test suite is practicable.
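The structure of a KORTERM-style test set, as described above, might be modelled as follows. Field names and the example data are illustrative assumptions, not the suite's actual format; the real suite contains about 5000 such sets.

```python
# Sketch of one test set (English sentence, model Korean translation,
# phenomenon category, yes/no question) and a yes/no scoring pass.
from dataclasses import dataclass

@dataclass
class TestSet:
    source: str             # English input sentence
    model_translation: str  # reference Korean translation
    phenomenon: str         # linguistic phenomenon category
    question: str           # yes/no question about the phenomenon

ts = TestSet(
    source="The book was read by Mary.",           # invented example
    model_translation="(reference Korean here)",
    phenomenon="passive voice",
    question="Is the passive correctly rendered?",
)

def accuracy(answers: list) -> float:
    """Fraction of test-set questions answered 'yes' (phenomenon handled)."""
    return sum(answers) / len(answers)

# e.g. a system handling 33 of 50 sampled phenomena scores 0.66:
print(round(accuracy([True] * 33 + [False] * 17), 2))
```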

Kurz, Christopher “Translatorisches Qualitätsmanagement als verantwortungsvolles Handeln.” Lebende Sprachen vol. 54, n. 4 (2009).  pp. 146-155.


Kurz, Ingrid “Conference Interpreting : Quality in the Ears of the User.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003364ar.pdf

What do the recipients of interpretation mean by ‘good interpretation’? What are the features they consider most important and what do they find irritating? Following a brief overview of user expectation surveys, the paper contends that the target audience is an essential variable in the interpretation equation. Quality of interpretation services is evaluated by users in terms of what they actually receive in relation to what they expected. Consequently, measurements of service quality that do not include user expectations miss the point.

Lambert, José “Measuring canonization: A reply to Paola Venturi.” Target: International Journal on Translation Studies vol. 21, n. 2 (2009).  pp. 358-363. http://dx.doi.org/10.1075/target.21.2.07lam

The article “The translator’s immobility: English modern classics in Italy” by Paola Venturi is an interesting illustration of the insights that can be gathered by international scholars from the perspectives of functional-systemic research as exemplified mainly by Gideon Toury (see http://www.tau.ac.il/~toury/) and Itamar Even-Zohar (http://www.tau.ac.il/~itamarez/). It is hardly necessary for me to stress the mutual complementarity of these two scholars’ methods: having introduced (or re-introduced) translation into the cultural dynamics with the aid of the sociologically oriented concept of “norms”, Toury left space for Even-Zohar and others to deal with the general fluctuations on a variety of scales of cultural value of translated communication, as one among many forms of communication. Whether the conceptual tools that these two scholars have given us are in full harmony with each other, and whether all of their implications have been fully explored, is not at issue here. Due to particular circumstances in the 1970s, their work has often been considered by translation scholars to be peculiarly relevant (only) for literary translation, though in fact the relevance of their concepts far transcends, and very explicitly so, the realms of literary scholarship (on translation). The perceived restriction to the particular sub-areas of translation studies may teach us more about the observers than about the observed.

Langlais, Philippe and Guy Lapalme “Trans Type: Development-Evaluation Cycles to Boost Translator’s Productivity.” Machine Translation vol. 17, n. 2 (2002).  pp. 77-98. http://dx.doi.org/10.1023/B:COAT.0000010117.98933.a0

Abstract We present TransType: a new approach to Machine-Aided Translation in which the human translator maintains control of the translation process while being helped by real-time completions proposed by a statistical translation engine. The TransType approach is first presented through a series of prototypes that illustrate their underlying translation model and graphical interface. The results of two rounds of in situ evaluation of TransType prototypes are discussed followed by a set of lessons learned in these experiments. It will be shown that this approach is valued by translators but given the short time allotted for the evaluation, translators were not able to quantitatively increase their productivity. TransType is compared with other approaches and new perspectives are elaborated for a new version being developed in the context of a Fifth Framework European Community Project.

Larose, Robert “Méthodologie de l’évaluation des traductions.” Meta vol. 43, n. 2 (1998).  pp.: http://www.erudit.org/revue/meta/1998/v43/n2/003410ar.pdf

This article addresses the problems involved in evaluating translated texts. It covers four parameters for evaluation, looks at criteria used in various organizations and concludes with general considerations for ‘fair’ evaluation of texts.

Lauscher, Susanne “Translation Quality Assessment: Where Can Theory and Practice Meet?” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=189&type=pdf

Despite increased interest within translation studies in providing orientation for translation quality assessment (TQA), academic efforts in this area are still largely ignored, if not explicitly rejected, by the profession. The purpose of this paper is to investigate why scientific models for evaluating translations are difficult to apply and to outline a number of ways in which the gap between theoretical approaches and practical needs may be negotiated.

Lee-Jahnke, Hannelore “Aspects pédagogiques de l’évaluation des traductions.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003447ar.pdf

Partant de l’adage pédagogique qu’on ne saurait bien faire que ce dont on comprend parfaitement l’objectif, notre propos est de montrer des approches novatrices dans les trois domaines suivants : 1. Différentes méthodes pour sensibiliser les étudiants à l’évaluation en général ; 2. L’évaluation « formative » telle qu’elle est pratiquée dans nos cours ; 3. Projet sur une évaluation « sommative ».

Leppihalme, Ritva “The Two Faces of Standardization: On the Translation of Regionalisms in Literary Dialogue.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=194&type=pdf

Non-standard language varieties such as dialect and sociolect are known to present serious problems for translators. The function(s) they serve in the source text can be weakened or lost in translation because there may well be no target-language variety with sufficiently similar situational characteristics. On the other hand, the common strategy of rendering non-standard source-language dialogue by standard target-language dialogue can lead to loss of the linguistic identity of the work and its author. This paper examines standardization through the English translation of one of the Finnish author Kalle Päätalo’s early novels. It suggests that standardization is not necessarily only negative in its results, as target readers may be more interested in other aspects of the target text than its linguistic identity.

Lewis, Amber L. “The Practical Implications of a Minimum Machine Translation Unit.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 43, n. 2 (1997).  pp.:

A great deal of speculation dominates the translation industry with regard to the effectiveness of (MT) Machine Translation, or translation software. This project investigates the conclusions of Bennet (1994) about the size of the UT (unit of translation), based on the raw translations of a sample text as produced by four competitive PC programs. These programs are all transfer systems, which employ a minimum UT, such as a single noun phrase. The sample text is an authentic business correspondence text. A linguistic analysis of the four translations is performed. Results of the analysis show that numerous errors are committed which require the intervention of the professional translator. This research concludes that, for this type of text, a transfer system is not cost-effective because it will still require extensive human editing. The semantic errors particularly demonstrate the need to emphasize research towards the development of translation software which incorporates a larger UT.

Low, G. “Evaluating translations of surrealist poetry: Adding note-down protocols to close reading.” Target: International Journal on Translation Studies vol. 14, n. 2 (2003).  pp.: http://ejournals.ebsco.com/direct.asp?ArticleID=J2H0P8B3PT4NT14395JH

Evaluating translations of poetry will always be difficult. The paper focuses on the problems posed by French surrealist poetry, where the reader was held to be as important as the writer in creating interpretations, and argues that evaluations involving these poems inevitably require reader-response data. The paper explores empirically, in the context of André Breton’s ‘L’Union libre’, whether a modification of Think-Aloud procedure, called Note-Down, applied both to the original text and to three English translations, can contribute useful information to a traditional close reading approach. The results suggest that comparative Note-Down protocols permit simple cost-benefit analyses and allow one to track phenomena, like the persistence of an effect through the text, which might be hard to obtain by other methods.

Martínez Melis, Nicole “Evaluation et didactique de la traduction: le cas de la traduction dans la langue étrangère.” Tesis Doctorals en Xarxa (TDX) vol., n. (2002).  pp.: http://www.tdx.cbuc.es/TDX-1116101-145109/index.html

Cette thèse qui se situe dans la branche appliquée de la traductologie propose des procédures, des tâches et des critères pour l’évaluation – dans sa fonction sommative – de la compétence de traduction de l’étudiant dans le cadre de la didactique de la traduction dans la langue étrangère. Elle étudie l’évaluation en tant qu’objet de recherche des sciences de l’éducation, en explique l’évolution, les modèles et les notions-clés. Elle aborde l’évaluation en traduction, délimite trois domaines de l’évaluation en traduction – évaluation des traductions des textes littéraires et sacrés, évaluation dans l’activité professionnelle de la traduction, évaluation dans la didactique de la traduction – et dégage la spécificité de chacun de ces domaines ainsi que les aspects qu’ils ont en commun. Texto completo: Parte 1: http://www.tdx.cbuc.es/TESIS_UAB/AVAILABLE/TDX-1116101-145109//nmm1de2.pdf . Parte 2: http://www.tdx.cbuc.es/TESIS_UAB/AVAILABLE/TDX-1116101-145109//nmm2de2.pdf

Martinez Melis, Nicole and Amparo Hurtado “Assessment in Translation Studies : Research Needs.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003624ar.pdf

On the whole, most research into assessment in translation concentrates on only one area, the evaluation of translations of literary and sacred texts, and other areas are ignored. In fact, this field of research includes two other areas, each with its own characteristics: assessment of professionals at work and assessment of trainee translators. Starting with this presupposition, we describe the three areas and analyze the notion of translation assessment, so as to define the characteristics of each area: objects, types, functions, aims and means of assessment. Next, we discuss the question of translation competence, and the concepts of translation problems and translation errors, in order to reach a general principle that should be applied in all assessment. Finally, we suggest assessment instruments to be used in teaching translation and make suggestions for research in assessing translator training, an area that has long been neglected and deserves serious attention.

Mayor, Aingeru, Iñaki Alegria, et al. “Evaluación de un sistema de traducción automática basado en reglas o por qué BLEU sólo sirve para lo que sirve.” Evaluation of a Rule-Based Machine Translation system or why BLEU is only useful for what it is meant to be used vol., n. 43 (2009).  pp. 197-205. http://www.sepln.org/revistaSEPLN/revista/43/articulos/art22.pdf

Matxin es un sistema de traducción automática basado en reglas que traduce a euskera. Para su evaluación hemos usado la métrica HTER que calcula el coste de postedición, concluyendo que un editor necesitaría cambiar 4 de cada 10 palabras para corregir la salida del sistema. La calidad de las traducciones del sistema Matxin ha podido ser comparada con las de un sistema basado en corpus, obteniendo el segundo unos resultados significativamente peores. Debido al uso generalizado de BLEU, hemos querido estudiar los resultados BLEU conseguidos por ambos sistemas, constatando que esta métrica no es efectiva ni para medir la calidad absoluta de un sistema, ni para comparar sistemas que usan estrategias diferentes. (A)
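The HTER figure cited above (a post-editor changing roughly 4 of every 10 words) can be made concrete with a small sketch. The function below is an illustrative simplification, not the implementation used in the article: it counts word-level insertions, deletions and substitutions via edit distance and divides by the length of the post-edited reference, omitting the block shifts that full HTER also counts.

```python
# Simplified HTER sketch: word-level edit distance between the raw MT
# output and its post-edited version, normalized by reference length.
# (Full HTER additionally counts block shifts; omitted here.)

def hter(hypothesis: str, post_edited: str) -> float:
    hyp, ref = hypothesis.split(), post_edited.split()
    # Standard word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(hyp)][len(ref)] / len(ref)

# Invented example: 3 substitutions out of 10 reference words -> 0.3
print(hter("el gato negro corre rápido ahora y siempre aquí mismo",
           "el gato negro corre despacio hoy y siempre allí mismo"))  # 0.3
```

An HTER of 0.4, as reported for Matxin, thus means an editor must touch about four words in ten to reach the post-edited version.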

Mayor Serrano, María Blanca “Necesidades terminológicas del traductor de productos sanitarios evaluación de recursos (EN, ES).” Panace@: Revista de Medicina, Lenguaje y Traducción vol. 10, n. 31 (2010).  pp. 10-15. http://dialnet.unirioja.es/servlet/extart?codigo=3257681

A pesar de que la traducción de textos sobre productos sanitarios es sumamente compleja y requiere diversos tipos de conocimiento especializado, los investigadores apenas le han prestado atención desde un punto de vista terminológico, lo que da lugar a una significativa falta de productos terminográficos útiles para el traductor. Especialmente, se han pasado por alto las necesidades de los traductores, sobre las que ha de asentarse la metodología para la elaboración de tales productos. En este artículo presento una selección de recursos y analizo si atienden a las necesidades terminológicas de los traductores. Para su elaboración me han sido de gran utilidad los resultados obtenidos de una encuesta realizada en dos listas de debate sobre traducción médica (Tremédica y MedTrad).

O’Brien, Sharon “Methodologies for Measuring the Correlations between Post-Editing Effort and Machine Translatability.” Machine Translation vol. 19, n. 1 (2005).  pp. 37-58. http://dx.doi.org/10.1007/s10590-005-2467-1

Abstract Against the background of a wider research project that aims to investigate the correlation, if any, between post-editing effort and the presence of negative translatability indicators in source texts submitted to Machine Translation (MT), this paper sets out to assess the potential of two methods for measuring the effort involved in post-editing MT output. The first is based on the use of the keyboard-monitoring program Translog; the second on Choice Network Analysis (CNA). The paper reviews relevant research in both machine translatability and MT post-editing, and appraises, in particular, the suitability of think-aloud protocols in assessing post-editing effort. The combined use of Translog and CNA is proposed as a way of overcoming some of the difficulties presented by the use of think-aloud protocols in the current context. Initial results from a study conducted at Dublin City University confirm that triangulating data from Translog and CNA can cast light on the temporal, cognitive and technical aspects of post-editing effort.

Orozco, M. and A. H. Albir “Measuring Translation Competence Acquisition.” Meta vol. 47, n. 3 (2002).  pp.:


Orsted, Jeannette “Quality and Efficiency : Incompatible Elements in Translation Practice.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003766ar.pdf

The aim of this article is to describe the quality assessment procedures in a large, national translation company. The company is more than ten years old, but the past five years’ growth rates have been rapidly increasing. The growth in turnover can be attributed both to a high degree of customer loyalty based on a high level of efficiency and trust, and on high, well-defined and transparent quality standards. The company is based on the idea that translators should function in a working environment based on full-time employment. Consequently the increase in turnover has involved recruiting a large number of translators and support services in the IT department. This is why quality assessment procedures are no longer an individual responsibility, but have become a corporate issue. Quality procedures must therefore be part of the daily routines and involve all aspects of the business. To understand the conditions of the translation market today, the author provides an overview of the market based on the ASSIM study and information on the new economy. After that she presents the case of Translation House of Scandinavia and finally she discusses some of the possible quality assurance systems that are available today and are used by the translation industry.

Ortín, Marcel “Els Dickens de Josep Carner i els seus crítics.” Quaderns vol. 7, n. (2002).  pp.: http://ddd.uab.es/search.py?&cc=quaderns&f=issue&p=11385790n7&rg=100&sf=fpage&so=a&as=0&sc=0&ln=ca

Entre els anys 1928 i 1931, en plena maduresa literària, Josep Carner va ocupar-se en la traducció de tres de les novel·les majors de Dickens: Pickwick Papers, David Copperfield i Great Expectations. Totes tres anaven destinades a la Biblioteca «A tot vent», la col·lecció de novel·la amb què van estrenar-se les Edicions Proa. Carner hi va portar una reflexió, sobre els requeriments de la llengua literària i sobre les virtualitats de l’art de traduir, en la qual havia anat aprofundint al llarg de trenta anys d’exercici. Els resultats que va obtenir amb Dickens cal analitzar-los a la llum d’aquesta reflexió, que pot donar raó de moltes solucions concretes. Des d’aquí és possible resseguir la controvèrsia recent sobre la qualitat real de les traduccions, i començar a plantejar el difícil problema de l’avaluació en l’àmbit de la traducció literària.

Owczarzak, Karolina, Josef Van Genabith, et al. “Evaluating machine translation with LFG dependencies.” Machine Translation vol. 21, n. 2 (2007).  pp. 95-119. http://dx.doi.org/10.1007/s10590-008-9038-1

Abstract In this paper we show how labelled dependencies produced by a Lexical-Functional Grammar parser can be used in Machine Translation evaluation. In contrast to most popular evaluation metrics based on surface string comparison, our dependency-based method does not unfairly penalize perfectly valid syntactic variations in the translation, shows less bias towards statistical models, and the addition of WordNet provides a way to accommodate lexical differences. In comparison with other metrics on a Chinese–English newswire text, our method obtains high correlation with human scores, both on a segment and system level.
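The general idea of comparing labelled dependency triples rather than surface strings can be sketched as follows. This is an illustrative simplification with invented toy data, not the authors' actual method (which uses an LFG parser and WordNet): each sentence is reduced to a set of (relation, head, dependent) triples and scored by precision/recall over their overlap.

```python
# Sketch: F-score over labelled dependency triples shared by a
# hypothesis translation and a reference. Triples are toy examples,
# not output of a real parser.

def dep_fscore(hyp_deps: set, ref_deps: set) -> float:
    overlap = len(hyp_deps & ref_deps)
    if overlap == 0:
        return 0.0
    precision = overlap / len(hyp_deps)
    recall = overlap / len(ref_deps)
    return 2 * precision * recall / (precision + recall)

ref = {("subj", "sign", "minister"), ("obj", "sign", "treaty"),
       ("det", "treaty", "the")}
# A translation using "agreement" for "treaty" still shares the
# subject relation, so it is not penalized to zero:
hyp = {("subj", "sign", "minister"), ("obj", "sign", "agreement"),
       ("det", "agreement", "the")}
print(round(dep_fscore(hyp, ref), 2))  # 0.33
```

A string-overlap metric would penalize every reordering of these words; matching at the dependency level credits the syntactic relations that are preserved regardless of word order.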

Paegelow, Richard S. “Ten Reasons Why Good Translations Sometimes Fail.” Translorial-Online vol., n. (1998).  pp.: http://www.ncta.org/displaycommon.cfm?an=1&subarticlenbr=24


Pinto Molina, María “Quality Factors in Documentary Translation.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003840ar.pdf

Well aware of the difficulties involved in integrating translating models and quality systems, we offer an overview of relevant developments in the field. Particular emphasis is placed on the pragmatic connotations of translation and on the methodological aspects of the Quality Paradigm, an approach to documentary translation that focuses activity on the target user.

Ten Hacken, Pius “Has There Been a Revolution in Machine Translation?” Machine Translation vol. 16, n. 1 (2001).  pp.: http://ipsapp009.lwwonline.com/content/getfile/4598/13/1/abstract.htm

When we compare the contributions on MT in the proceedings of Coling 1988 and Coling-ACL 1998, it seems obvious that in the period between them a revolution has taken place. Often this intuition is formulated as the replacement of linguistic approaches by statistical approaches. On closer inspection, however, this position cannot be defended. An analysis of Rosetta, concentrating on the different levels of discussion and of underlying assumptions, shows that the choice of knowledge from linguistic theories or information theory and corpora is by itself not a decisive issue. More important is the question of how the problem to be solved by an MT system is defined. An analysis of the decisions underlying Verbmobil, resulting in a list corresponding point by point to the one for Rosetta, shows how far-reaching the new approach to defining the problem of MT is. As it is shown that these systems are representative of the work in MT as it was done ten years ago and today, it can reasonably be argued that a revolution in MT has taken place, though not in exactly the way it is often believed.

Pöchhacker, Franz “Quality Assessment in Conference and Community Interpreting.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003847ar.pdf

On the assumption that interpreting can and should be viewed within a conceptual spectrum from international to intra-social spheres of interaction, and that high standards of quality need to be ensured in any of its professional domains, the paper surveys the state of the art in interpreting studies in search of conceptual and methodological tools for the empirical study and assessment of quality. Based on a selective review of research approaches and findings for various aspects of quality and types of interpreting, it is argued that there is enough common ground to hope for some cross-fertilization between research on quality assessment in different areas along the typological spectrum of interpreting activity.

Robinson, Bryan “‘Las ruinas circulares’ de Jorge Luis Borges.” Sendebar. Boletín de la Facultad de Traductores e Interpretes de Granada vol., n. 13 (2002).  pp.:

La traducción de un texto literario escrito por un autor tan consciente del discurso como lo era Borges requiere por parte del traductor un análisis especialmente riguroso de ese texto y de los procesos implicados en su lectura. La traducción de ‘Las ruinas circulares’ por James Irby (Yates & Irby 1970) es una lograda versión del original centrada en el texto, aunque su decisión de no hacer una traducción centrada en el lector es contraria al propósito comunicativo perseguido por Borges en el original. Esto se pone de manifiesto gracias al uso de herramientas proporcionadas por el análisis del discurso en la realización de este ejercicio de crítica de traducción. (A.)

Rosenmund, Alain “Konstruktive Evaluation : Versuch eines Evaluationskonzepts für den Unterricht.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/003987ar.pdf

All translations should be assessed with regard to their context and aim. Therefore, and in order to objectivize the assessment as well as to prepare the students for the professional environment, the assessment of students’ translations should be based on specifications which have been worked out by the professor or lecturer and students beforehand.

Ruiz Rosendo, Lucía “La evaluación de la calidad en interpretación desde la perspectiva del usuario. Los congresos de medicina.” Sendebar. Boletín de la Facultad de Traductores e Interpretes de Granada vol., n. 16 (2005).  pp.:

Este artículo tiene como objetivo principal describir el estado de la calidad en interpretación, centrándonos en los congresos de medicina. Para ello, hemos dividido el trabajo que nos ocupa en distintos apartados, tres apartados más generales y un apartado específico de la interpretación en congresos de medicina: (1) análisis de las definiciones existentes del concepto de calidad; (2) estudio de los parámetros más relevantes que influyen y condicionan la calidad de la interpretación; (3) breve análisis de los estudios experimentales o empíricos pioneros realizados en el ámbito de la calidad desde la perspectiva del usuario, y (4) análisis de los estudios de calidad realizados en el ámbito de la medicina desde la perspectiva del usuario. Por último, hemos incluido un apartado en el que exponemos brevemente los resultados de un estudio empírico que hemos realizado entre intérpretes especializados en congresos de medicina, centrándonos en los resultados sobre los criterios de evaluación de la calidad en congresos médicos.

Selvaggini, Luisa and Alessandro Finzi “Analisi della correlazione tra giudizio estetico e valutazione di fedeltà all’originale in traduzioni dallo spagnolo.” Scrittura e riscrittura. Traduzioni, refundiciones, parodie e plagi: Atti del Convegno di Roma [Associazione Ispanisti Italiani] vol., n. (1995).  pp. 131-140. http://dialnet.unirioja.es/servlet/extart?codigo=2349302


Shashok, Karen “La calidad en el Servicio de Traducción de la Comisión Europea.” Panacea : boletín de medicina y traducción vol. 5, n. 16 (2004).  pp.: http://www.medtrad.org/panacea/PanaceaAnteriores.htm

Por gentileza de los organizadores, dos medtraderos pudimos asistir a la conferencia de Emma Wagner titulada «The Quest for Translation Quality in International Organizations» durante las IV Jornadas sobre la Formación y la Profesión del Traductor e Intérprete, organizadas por la Universidad Europea de Madrid (España; véase al respecto, en las páginas 183-186 de este número de Panace@, el artículo de Cáceres Würsig, Pérez González y Strotmann). Wagner trabajó para la Comisión Europea (CE) durante treinta años como traductora, correctora y directora del Servicio de Traducción (SdT), y ha destacado por su actitud crítica frente al lenguaje burocrático, opaco y recargado, uno de los grandes obstáculos para la buena traducción.

Sherwin, Ann C. “Buzzword or Bonanza? A Translator Reflects on Best Practice.” The Translation Journal vol. 10, n. 2 (2005).  pp.: http://accurapid.com/journal/

There’s no doubt that ‘best practice’ is a hot topic today. The exact phrase brings nearly 40 million hits with Google, including 16 sponsored links related to sales and marketing, education, research, manufacturing, information science, health care, and more. Amazon.com lists over 2300 books with ‘best practice’ as a keyword. To me it was pretty much just a buzzword. It sounded good, and I assumed it was an apt description of the way I ran my business

St. Andre, James “Between Tongues. Translation and/of/in Performance in Asia.” Target: International Journal on Translation Studies vol. 21, n. 2 (2009).  pp. 403-405. http://www.ingentaconnect.com/content/jbp/targ/2009/00000021/00000002/art00016

Jennifer Lindsay, ed. Between Tongues. Translation and/of/in Performance in Asia. Singapore: Singapore University Press, 2006. xvi + 302 pp. ISBN 9971-69-339-9. 28 USD. Reviewed by James St. André (Manchester)

Steiner, Erich “A Register-Based Translation Evaluation: An Advertisement as a Case in Point.” Target: International Journal on Translation Studies vol. 10, n. 2 (1998).  pp.:

Se estudian los elementos que no deben faltar en una evaluación de traducciones basada en el análisis del registro utilizado. En la primera sección se aboga por un enfoque eminentemente teórico en lo que respecta a la evaluación de traducciones, aunque también se tiene en cuenta el ámbito más general de la lingüística. En las secciones 2, 3 y 4 se analizan los aspectos concretos del campo, el tenor y el modo, mientras que en la 5 se expone que para evaluar una traducción también será necesario acudir a la lingüística comparativa y a las tipologías textuales. Por último, se insiste en que este tipo de evaluación acerca a la traducción y a la cogeneración, con lo que resulta posible establecer vínculos entre la calidad de las traducciones y la de otros textos en general.

Valero Garcés, Carmen “Cómo evaluar la competencia traductora. Varias propuestas.” Congrés Internacional sobre Traducció vol., n. 2 (1994).  pp.: http://ddd.uab.es/pub/traduccio/Actes4.pdf

El concepto de “buen traductor” es inherente a cualquier discusión en el campo de los estudios de traducción. Los formadores de traductores deben creer en ciertas características implícitas que tipifican a dicho profesional, de acuerdo con las cuales diseñan sus programas, seleccionan tipos de textos y materiales y aplican los procedimientos evaluativos que consideran apropiados. La primera pregunta que surge es qué debe saber y qué destrezas debe desarrollar y dominar el futuro traductor para poder traducir. De este modo se plantea el debate sobre la competencia del traductor y el modo de adquirir dicha cualidad.

Vanderschelden, Isabelle “Quality Assessment and Literary Translation in France.” The Translator vol. 6, n. 2 (2000).  pp.: http://www.stjerome.co.uk/periodicals/viewfile.php?id=195&type=pdf

This article examines the current place of literary translation in the French literary polysystem. By considering the perspectives of various parties, such as publishers, literary translators and book reviewers, its objective is to survey the impact of translated literature in France and to explore the visibility of the literary translator and of translated literature. More specifically, the article raises the issue of quality assessment of translations published in France and analyzes some of the criteria applied both explicitly and implicitly when literary texts in translation are evaluated. The arguments developed here are based mainly on information collected about French publishers and literary translators from interviews or other accounts, and also on recent reviews of translated literature published in the French press.

Varela Salinas, María-José and Encarnación Postigo Pinazo “La evaluación en los estudios de traducción.” The Translation Journal vol. 9, n. 1 (2005).  pp.: http://accurapid.com/journal/31evaluacion.htm

Improving assessment in the field of translation is one of the most important challenges, since the evaluation of academic performance is indispensable, both because it is institutionally required and because of the very nature of academic activity. The problems teachers habitually face are varied. One of the greatest is the subjectivity and personal perception of both evaluator and evaluated when judging the outcome of a teaching-learning process for which, in the field of translation, there are still not enough systematized criteria. This stems, in part, from a lack of awareness of what can and should be taught in translation classes and, therefore, of what can be assessed in an academic translation test (Goff-Kfouri, 2004).

Verdegal, Joan “Los neologismos literarios y sus efectos en traducción: Propuesta analítico-evaluadora de la distorsión (contexto francés-español/francés-catalán).” Sendebar. Boletín de la Facultad de Traductores e Interpretes de Granada vol., n. 13 (2002).  pp.:

Vidal, Mirta “NAJIT Certification on the Way.” Proteus vol. 9, n. 3 (2000).  pp.: http://www.najit.org/proteus/v9n3/vidal_v9n3.htm

Some of you may think NAJIT’s efforts to create a certification program for judiciary interpreters have been a long time coming. Actually, most of you were not even members when the idea began to be seriously considered. I remember sitting with Dagoberto Orrantia and Janis Palma, who was then chair, in a restaurant in San Juan nine years ago having a heated argument about whether or not we should have an exam, what kind of an exam, and how it could be done. And that was only the first of many heated arguments, as Cristina will remember, because the importance of the issue makes people very passionate about the subject.

Viola Rodrigues, Sara “Translation quality: a Housian analysis.” Meta vol. 41, n. 2 (1996).  pp.: http://www.erudit.org/revue/meta/1996/v41/n2/003969ar.pdf

This paper analyses the translation quality assessment model devised in 1981 by Juliane House. Although it has become somewhat dated, it remains the model that has worked best to date and represents a major advance over those that existed before it.

Waddington, Christopher “Different Methods of Evaluating Student Translations : The Question of Validity.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/004583ar.pdf

This article examines the criterion-related validity of the results obtained by the application of four different methods of assessment to the correction of a second-year exam of translation into the foreign language (Spanish-English) done by 64 university students. These four methods are based on types currently used by university teachers, and the validation study is based on 17 external criteria taken from six different sources. In spite of this variety, a factor analysis reveals the presence of one main factor which is clearly identifiable as Translation Competence. The hypotheses regarding differences between the validity of the methods are verified as null, since all the systems, whether based on error analysis or a holistic approach, prove to correlate significantly with this main factor.
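Waddington's validation study turns on correlating the scores each assessment method assigns with an external criterion. The computation itself is a plain Pearson correlation; the sketch below illustrates it with invented scores for six students (the data are not from the article, and a real study would use the full cohort and significance testing).

```python
def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented scores for 6 students: two assessment methods plus
# one external criterion measure (all numbers hypothetical).
error_based = [6.0, 7.5, 5.0, 8.0, 4.5, 9.0]
holistic    = [5.5, 7.0, 5.5, 8.5, 4.0, 8.5]
criterion   = [6.2, 7.8, 5.1, 8.3, 4.6, 8.8]

r_error = pearson(error_based, criterion)
r_holistic = pearson(holistic, criterion)
```

If both correlations come out high, as in Waddington's data, neither the error-analysis nor the holistic method can be said to track the external criterion better than the other.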

Waddington, C. “Measuring the effect of errors on translation quality.” Lebende Sprachen vol. 51, n. 2 (2006).  pp.:

Way, Andy and Nano Gough “Controlled Translation in an Example-based Environment: What do Automatic Evaluation Metrics Tell Us?” Machine Translation vol. 19, n. 1 (2005).  pp. 1-36. http://dx.doi.org/10.1007/s10590-005-1403-8

Abstract This paper presents an extended, harmonised account of our previous work on integrating controlled language data in an Example-based Machine Translation system. Gough and Way in MT Summit pp. 133–140 (2003) focused on controlling the output text in a novel manner, while Gough and Way (9th Workshop of the EAMT, (2004a), pp. 73–81) sought to constrain the input strings according to controlled language specifications. Our original sub-sentential alignment algorithm could deal only with 1:1 matches, but subsequent refinements enabled n:m alignments to be captured. A direct consequence was that we were able to populate the system’s databases with more than six times as many potentially useful fragments. Together with two simple novel improvements (correcting a small number of mistranslations in the lexicon, and allowing multiple translations in the lexicon) translation quality improves considerably. We provide detailed automatic and human evaluations of a number of experiments carried out to test the quality of the system. We observe that our system outperforms the rule-based on-line system Logomedia on a range of automatic evaluation metrics, and that the ‘best’ translation candidate is consistently highly ranked by our system. Finally, we note in a number of tests that the BLEU metric gives objectively different results than other automatic evaluation metrics and a manual evaluation. Despite these conflicting results, we observe a preference for controlling the source data rather than the target translations.
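The automatic metrics compared in this study, BLEU among them, are built on clipped n-gram precision: how many n-grams of the candidate translation also occur in a reference, with each n-gram's count capped at its count in the reference. A minimal sketch of that core quantity (not the full BLEU score, which also combines several n-gram orders and a brevity penalty):

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision of a candidate against one reference."""
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Each candidate n-gram counts at most as often as it appears in the reference.
    clipped = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return clipped / total if total else 0.0

p1 = ngram_precision("the cat sat on the mat", "the cat is on the mat", 1)
```

That such surface-overlap counts can disagree with human judgements is precisely the tension the abstract reports between BLEU and the manual evaluation.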

Williams, Malcolm “The Application of Argumentation Theory to Translation Quality Assessment.” Meta vol. 46, n. 2 (2001).  pp.: http://www.erudit.org/revue/meta/2001/v46/n2/004605ar.pdf

Translation quality assessment (TQA) models may be divided into two main types: (1) models with a quantitative dimension, such as SEPT (1979) and Sical (1986), and (2) non-quantitative, textological models, such as Nord (1991) and House (1997). Because it tends to focus on microtextual (sampling, subsentence) analysis and error counts, Type 1 suffers from some major shortcomings. First, because of time constraints, it cannot assess, except on the basis of statistical probabilities, the acceptability of the content of the translation as a whole. Second, the microtextual analysis inevitably hinders any serious assessment of the content macrostructure of the translation. Third, the establishment of an acceptability threshold based on a specific number of errors is vulnerable to criticism both theoretically and in the marketplace. Type 2 cannot offer a cogent acceptability threshold either, precisely because it does not propose error weighting and quantification for individual texts. What is needed is an approach that combines the quantitative and textological dimensions, along the lines proposed by Bensoussan and Rosenhouse (1990) and Larose (1987, 1998). This article outlines a project aimed at making further progress in this direction through the application of argumentation theory to instrumental translations.
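The Type 1 models Williams criticises reduce quality to a weighted error count compared against an acceptability threshold. The sketch below shows the mechanism in its simplest form; the severity weights and the threshold are invented for illustration, not the actual SEPT or Sical values.

```python
# Invented severity weights and pass threshold, for illustration only.
WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def weighted_score(errors, threshold=15):
    """Sum weighted error points; the text 'passes' if below the threshold."""
    total = sum(WEIGHTS[severity] for severity in errors)
    return total, total < threshold

score, acceptable = weighted_score(["minor", "minor", "major"])
```

Williams's objection is visible even in this toy: the verdict depends entirely on counting localised errors, so a translation whose macrostructure or argumentation is defective can still pass if its sentence-level errors are few.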

Yoshimi, Takehiko “Improvement of Translation Quality of English Newspaper Headlines by Automatic Pre-editing.” Machine translation vol. 16, n. 4 (2001).  pp.: http://www.springerlink.com/media/n0fprkwvtn4cc2nhxnby/contributions/r/3/5/1/r35132023577045u.pdf

Since the headlines of English news articles have a characteristic style, different from the styles which prevail in ordinary sentences, it is difficult for MT systems to generate high-quality translation for headlines. We try to solve this problem by adding to an existing system a pre-editing module which rewrites headlines as ordinary expressions. Rewriting of headlines makes it possible to generate better translations which would not otherwise be generated, with little or no changes to the existing parts of the system. Focusing on the absence of a form of the verb be as a missing part of normal English, we have described rewriting rules for properly inserting the verb be into headlines, based on information obtained by morpho-lexical and rough syntactic analysis. We have incorporated the proposed method into our English–Japanese MT system, and carried out an experiment with 312 headlines as unknown data. Our method achieved a satisfactory 81.2% recall and 92.0% precision.
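The pre-editing idea can be illustrated with a deliberately crude stand-in for the paper's rules: if a headline contains no form of "be" and a word after the first looks like a past participle, insert "is" before it. Yoshimi's actual rules rest on morpho-lexical and syntactic analysis; this regex heuristic is only a sketch of the rewrite step, and the function name is my own.

```python
import re

BE_FORMS = {"is", "are", "was", "were", "be", "been", "being"}

def preedit(headline):
    """Toy headline rewriter: insert 'is' before an apparent past participle
    when the headline contains no form of the verb 'be'."""
    words = headline.split()
    if any(w.lower() in BE_FORMS for w in words):
        return headline  # already has a 'be' form; leave untouched
    for i, w in enumerate(words[1:], start=1):
        if re.fullmatch(r"\w+ed", w.lower()):  # crude participle test
            return " ".join(words[:i] + ["is"] + words[i:])
    return headline
```

For example, `preedit("Mayor elected in landslide")` yields the ordinary-English form "Mayor is elected in landslide", which a general-purpose MT system can parse more reliably than headline style.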

Lee, Young-Suk, Daniel J. Sinder, et al. “Interlingua-based English–Korean Two-way Speech Translation of Doctor–Patient Dialogues with CCLINC.” Machine translation vol. 17, n. 3 (2002).  pp.: http://ipsapp009.kluweronline.com/IPS/content/ext/x/J/4598/I/20/A/1/abstract.htm

Development of a robust two-way real-time speech translation system exposes researchers and system developers to various challenges of machine translation (MT) and spoken language dialogues. The need for communicating in at least two different languages poses problems not present for a monolingual spoken language dialogue system, where no MT engine is embedded within the process flow. Integration of various component modules for real-time operation poses challenges not present for text translation. In this paper, we present the CCLINC (Common Coalition Language System at Lincoln Laboratory) English–Korean two-way speech translation system prototype trained on doctor–patient dialogues, which integrates various techniques to tackle the challenges of automatic real-time speech translation. Key features of the system include (i) language–independent meaning representation which preserves the hierarchical predicate–argument structure of an input utterance, providing a powerful mechanism for discourse understanding of utterances originating from different languages, word-sense disambiguation and generation of various word orders of many languages, (ii) adoption of the DARPA Communicator architecture, a plug-and-play distributed system architecture which facilitates integration of component modules and system operation in real time, and (iii) automatic acquisition of grammar rules and lexicons for easy porting of the system to different languages and domains. We describe these features in detail and present experimental results.

Yuen Wan, Ngan and Kong Wai Ping “The Effectiveness of Electronic Dictionaries as a Tool for Translators.” Babel: Revue internationale de la traduction/International Journal of Translation vol. 43, n. 2 (1997).  pp.:

In view of the growing popularity of electronic dictionaries, the Consumer Council of Hong Kong, a statutory body financed by annual subvention from the Government of Hong Kong to protect and promote the interests of the consumers of goods and services, conducted a survey to evaluate the effectiveness of the various functions of 15 models of electronic dictionaries available in 1994. The authors, who served as consultants in all language-related aspects of this survey, will evaluate the usefulness of these dictionaries to translators on the basis of the survey findings. Their vocabulary database in the realms of difficult, modern, and scientific and technical words as well as phrases will be explored in that lists of words and phrases are meticulously compiled before the words and phrases are checked in the dictionaries. Moreover, as two electronic dictionaries claim that they could translate English sentences into Chinese, different types of sentences are tested to see whether or not they are able to produce satisfactory translations.

Zequan, Liu “Translation Quality Assessment.” The Translation Journal vol. 7, n. 3 (2003).  pp.: http://accurapid.com/journal/25register.htm

Register, or context of situation as it is formally termed, ‘is the set of meanings, the configuration of semantic patterns, that are typically drawn upon under the specific conditions, along with the words and structures that are used in the realization of these meanings’ (Halliday, 1978:23). It is concerned with the variables of field, tenor, and mode, and is a useful abstraction which relates variations of language use to variations of social context. Therefore, register analysis of linguistic texts, which enables us to uncover how language is manoeuvred to make meaning, has received wide application in (critical) discourse analysis and (foreign) language teaching pedagogy.